The Way You Look at the World Will Change… Soon
December 2, 2011. Posted by gordonwatts in ATLAS, CERN, CMS, Higgs, physics.
We are coming up on one of those “lucky to be alive to see this” moments. Sometime in the next year we will all know, one way or the other, whether the Standard Model Higgs exists. Either way, how we think about fundamental physics will change. I can’t overstate the importance of this. And the first strike along this path will occur on December 13th.
If it does not exist, that will force us to tear down and rebuild – in some totally unknown way – our model of physics. A model we’ve had for 40+ years now. Imagine that – 40 years, and now that it finally meets data… poof! Gone. Or we will find the Higgs, and we’ll have a mass. Knowing the mass will be interesting in itself, and finding the Higgs won’t change the fact that we still need something more than the Standard Model to complete our description of the universe. But now every single beyond-the-Standard-Model theory will have to incorporate not only electrons, muons, quarks, W’s, Z’s, photons, and gluons – at their measured masses – but also a Higgs, at whatever mass we measure!
Ok, this takes a second to explain. First, when we look for the Higgs we do it as a function of its mass – the theory does not predict exactly how massive it will be. Second, the y-axis is the rate at which the Higgs is produced. When we look for it at a certain mass we make a statement like “if the Higgs exists at a mass of 200 GeV/c2, then it must be being produced at a rate less than 0.6 or we would have seen it.” I read the 0.6 off the plot by looking at the placement of the solid black line with the square points – the observed upper limit. The rate, the y-axis, is in funny units. Basically, the red line is the rate you’d expect if it were a Standard Model Higgs. The solid black line with the square points on it is the combined LHC exclusion line. Combined means ATLAS + CMS results. So, anywhere the solid black line dips below the red horizontal line means that we are fairly confident that the Standard Model Higgs doesn’t exist at that mass (BTW – even “fairly confident” has a very specific meaning here: we are 95% confident). The hatched areas are the areas where the Higgs has already been ruled out. Note the hatched areas at low mass (100 GeV or so) – those are from other experiments like LEP.
Now that that is out of the way, a fair question is where we would expect to find the Higgs. As it turns out, a Standard Model Higgs will most likely occur at low masses – exactly that region between 114 GeV/c2 and 140 GeV/c2. There isn’t a lot of room left for the Higgs to hide there!! These plots are with 2 fb-1 of data. Both experiments now have about 5 fb-1 of data recorded. And everyone wants to know exactly what they see. Heck, while within each experiment we basically know what we see, we desperately want to know what the other experiment sees. The first unveiling will occur at a joint seminar at 2pm on December 13th. I really hope it will be streamed on the web, as I’ll be up in Whistler for my winter ski vacation!
So what should you look for during that seminar (or in the talks that will be uploaded when the seminar is given)? The above plot will be a quick summary of the status of the experiments. Each experiment will have an individual one. The key thing to look for is anywhere the dashed line and the solid line deviate significantly. The solid line I’ve already explained – it says that if a Higgs of a particular mass is there, it must be produced at a rate less than what is shown. The dashed line is what we expect – if everything went right and the Higgs didn’t exist at that mass, that is how good a limit we expect to set. So, for example, right around the 280 GeV/c2 level we expect to be able to exclude a rate of about 0.6, and that is almost exactly what we measure. Now look down around 120-130 GeV/c2. There you’ll notice that the solid observed line is well above the dashed expected line. How much? Well, it is just along the edge of the yellow band – which means 2 sigma. 2 sigma isn’t very much – so this plot has nothing to get very excited about yet. But if one of the plots shown over the next year has a more significant excursion, and you see it in both experiments… then you have my permission to get a little excited. The real test will be whether we can get to a 5 sigma excursion.
This seminar is the first step in this final chapter of the old realm of particle physics. We are about to start a new chapter. I, for one, can’t wait!
N.B. I’m totally glossing over the fact that if we do find something in the next year that looks like a Higgs, it will take us some time to make sure it is a Standard Model Higgs, rather than some other type of Higgs! A 2nd order effect, as they say. Also, in that last long paragraph, the sigmas I’m talking about on the plot and the 5 sigma discovery aren’t the same – so I glossed over some real details there too (and this latter one is a detail I sometimes forget, much to my embarrassment at a meeting the other day!).
Boom!
March 30, 2010. Posted by gordonwatts in CERN, LHC.
Do not start with a whimper… Start with a…
or perhaps you’d prefer a…
That first event – the small red track (click to enlarge) – is actually a muon candidate. Something we could almost never see in the 900 GeV collisions from last December – very little was energetic enough to make it out that far.
So, now the real work begins. Soon the press will pack up and we can get down to actually making sense of this fantastic new microscope we’ve been given! It is going to be a fun 18 months of first data!
Collisions
March 30, 2010. Posted by gordonwatts in CERN, LHC.
Wow. It is almost 5 am. I have a meeting in 3 hours. It is a bit anticlimactic watching it alone in the dark here in Seattle. But still. This is the beginning. The next year will have many more sleepless nights. A job well done by everyone who has worked for so long to see these first collisions – many for 20 years or more!
What do you mean it isn’t about the $$?
December 16, 2009. Posted by gordonwatts in ATLAS, CERN, LHC, life.
A cute article in Vanity Fair:
Among the defining attributes of now are ever tinier gadgets, ever shorter attention spans, and the privileging of marketplace values above all. Life is manically parceled into financial quarters, three-minute YouTube videos, 140-character tweets. In my pocket is a phone/computer/camera/video recorder/TV/stereo system half the size of a pack of Marlboros. And what about pursuing knowledge purely for its own sake, without any real thought of, um, monetizing it? Cute.
Something I found out from this article – the LHC is the largest machine ever built. Ok. Wow. Ever!? I would have thought that something like a large aircraft carrier would have beaten it. Still.
The attention span is another interesting aspect I’d not thought about. You know that the first space shuttles used magnetic core memory (see the reference in that Wikipedia article). There were a number of reasons for this – one of them was certainly that no better technology was available when they started. Before the shuttle was built, more robust memory appeared – but it was too late to redesign. Later space shuttles were fitted with more modern versions of the memory.
In internet time, 6 months or a year and you are already a version behind. And it matters. It would seem part of the point of the now is to be using the latest and greatest. You know how everyone stands around a water cooler discussing the latest episode of some TV show (e.g. Lost when it first started). Now it is the latest iPhone update or some other cool new gadget. Oops. Hee hee. I said water cooler. How quaint. Obviously, I meant facebook.
Projects like the space shuttle or the LHC take years and years. A lot of people have to remain focused for that long – as do the governments that provide the funding. You know how hard that is, especially in a place like the USA where the budget is debated every year? It is hard. Some people have been working on this for 20 years. 20 years! And now data is finally arriving. Think about that: designs set down 20 years ago have finally been built, installed, integrated, and tested.
This science does not operate on internet time. But we are now deep in the age of internet time. How will the next big project fare? Will we as a society have the commitment to get it done?
I like the writing style in this VF article – a cultural look at the LHC. They do a good job of describing the quench as well. I recommend the read. And, finally, yes, this post ended up very different from the way it started.
Thanks to Chris @ UW for bringing this article to my attention.
See CERN History
December 1, 2009. Posted by gordonwatts in CERN, Fermilab, physics life.
This is a quick note to draw your attention to a small retrospective program that CERN has put together – “From the Proton Synchrotron to the Large Hadron Collider – 50 Years of Nobel Memories in High-Energy Physics” – yeah, yeah, it is like a Microsoft product name, but check out the list of speakers – 13 of them are Nobel prize winners. And these are all “memory” talks – so they should be quite entertaining. The event will be video-broadcast over the internet – a link should appear on that agenda page where you can watch. The time is Central European Time – which is 9 hours ahead of Pacific time in the USA.
The context for this event is the turn-on of the LHC, of course. The accelerator recently took the title of “most powerful accelerator in the world” away from Fermilab – and is on its way to a turn-on and real data. Ironically, I was on shift at Fermilab a few hours before this event happened – my plan was to call up the ATLAS control room if it did happen and congratulate them… but I was asleep by the time it actually happened.
I’m at CERN now – and the atmosphere is electric. This review talk is a perfect stepping stone for the future.
Bjarne Stroustrup
September 8, 2009. Posted by gordonwatts in CERN, computers, ROOT.
If you are even semi-conscious of the computing world you know this name: Bjarne Stroustrup. He is the father of C++. He started designing the language in the very late 1970s and continues to this day trying to keep it from getting too “weird” (his words).
He visited CERN this last week, invited by the ROOT team (I took a few pictures). I couldn’t see his big plenary talk due to a meeting conflict, but my friend Axel, on the ROOT team, was nice enough to invite me along to a smaller discussion. Presentations made at this discussion should be posted soon here. The big lecture is posted here, along with video (sadly, in flash and wmv format – not quite mp4 as I’ve been discussing!!)! I see that Axel also has a blog and he is posting a summary there too – in more detail than I am.
The C++ standard – which defines the language – is currently overseen by an ISO standards committee. Collectively they decide on the features of and changes to the language. The membership is made up of compiler vendors, library vendors, library authors, large banking organizations, Intel, Microsoft, etc. – people who have a little $$ and make heavy use of C++. Even high energy physics is represented – Walter Brown from Fermilab. Apparently committee membership is basically open – it costs about $10K/year to send someone to all the meetings. That is it. Not very expensive. The committee is currently finishing off a new version of the C++ language, commonly referred to as C++0x.
The visit was fascinating. I’ve always known there is plenty of politics when a group of people get together and try to decide things. Heck, I’m in High Energy Physics! But I guess I’d never given much thought to a programming language! Part of the reason it was so fascinating was that several additions to the language that folks in HEP were interested in were taken out at the last minute – for a variety of reasons – so we were all curious as to what happened.
I learned a whole bunch of things during this discussion (sorry for going technical on everyone here!):
- Bjarne yelled at us multiple times: people like HEP are not well represented on the committee. So join the thing and get views like ours better represented (though he worried if all 150 labs joined at once that might cause a problem).
- In many ways HEP is now pushing several multi-core computing boundaries. Both in numbers of cores we wish to run on and how we use memory. Memory is, in particular, becoming an acute problem. Some support in the standard would be very helpful. Minimal support is going in to the new standard, but Bjarne said, amazingly enough, there are very few people on the committee who are willing to work on these aspects. Many have the attitude that one core is really all that is needed!!! Crazy!
- In particle physics we leak memory like a sieve. Many times our jobs crash because of it. Most of the leaks are pretty simple and a decent garbage collector could efficiently pick up everything and allow our programs to run longer. Apparently this almost made it into the standard until a coalition of the authors of the boost library killed it: if you need a garbage collector then you have a bug; just fix it. Which is all good and glorious in an ideal world, but give me a break! In a 50 million line code base!? One thing Bjarne pointed out was it takes 40 people to get something done on the committee, but it takes only 10 to stop it. Sort of like health insurance.
- Built-in support for memory pools would probably be quite helpful here too. The idea is that when you read in a particle physics event you allocate all the data for that event in a special memory pool. The data from an event is pretty self-contained – you don’t need it once you are done processing that event and have moved on to the next one. If it is all in its own memory pool, then you can just wipe it out all at once – who cares about carefully deleting each object individually. As part of the discussion of why something like this isn’t in there (scoped allocators sound like they might be partway there) he mentioned that HP was “on our side”, Intel was “not”, and Microsoft was one of the most aggressive when it came to adding new features to the language.
- I started a discussion of how the STL is used in HEP – pointing out that we make very heavy use of vector and map, and then very little else. Bjarne expressed a general frustration that no one was really writing their own containers. In the ensuing discussion he dissed something that I often make use of – the for_each loop algorithm. His biggest complaint was how much stuff it added – you had to create a whole new class, which involves lots of extra lines of code – and that the code is no longer near where it is being used (non-locality can make source code hard to read). He is right that both are problems, but to him they are big enough to nix its use except in rare circumstances. Perhaps I’ll have to re-examine the way I use it.
- He is not a fan of OpenMP. I don’t like it either, but sometimes people trot it out as the only game in town. Surely we know enough to do better now. Task-based parallelism? By slots?
- Bjarne is very uncomfortable with lambda functions – a shorthand way to write one-off functions. To me this is the single best thing being added to the language – it will now be possible to totally avoid having to write another mem_fun or bind2nd template. That is huge, because those things never worked anyway – you could spend hours trying to make the code build, and they added so much cruft to your code you could never understand what you were trying to do in the first place! He is nervous that people will start adding large amounts of code directly into lambda functions – as he said, “if it is more than one line, it is important enough to be given a name!!” We’ll have to see how usage develops.
- He was pretty dismissive of proprietary languages. Java and C# were both put in this category (both have international standards behind them, just like C++, however) – citing vendor lock-in. But the most venom I detected was when he was discussing the LLVM open source project. This is a compiler infrastructure with a JIT, which can be used to build things like C++ interpreters. The project was loosely run but has now been taken over by Apple – presumably to be, among other things, packaged with their machines. His comment was basically “I used to think that was very good, but now that it has been taken over by Apple I’d have to take a close look at it and see what direction they were taking it.”
- Run Time Type Information. C++ came into its own around 1983 or so. No other modern language is without the ability to inspect itself. Given an object, you can usually determine what methods are on the object, what the arguments of those methods are, etc. – and most importantly, build a call to that method without having ever seen the code in source form. Beyond the minimal typeid and dynamic_cast, C++ does not have this. We all thought there was a big reason that wasn’t the case. The real reason: no one has pushed hard enough or is interested enough on the committee. For folks doing dynamic coding or writing interpreters this is crucial. We have to do that in our code, and adding the information in after the fact is cumbersome and causes code bloat. Apparently we just need to pack the C++ committee!
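To make the memory-pool idea above concrete, here is a minimal sketch of a per-event arena. The class and names are mine, purely for illustration – no real HEP framework works exactly this way, and a production version would have to deal with alignment and destructors far more carefully:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy per-event arena: hand out memory from one big buffer, and "delete"
// everything for the event at once by resetting an offset. No per-object
// bookkeeping, no per-object deletes.
class EventArena {
public:
    explicit EventArena(std::size_t bytes) : buffer_(bytes), offset_(0) {}

    // Bump-pointer allocation, rounded up to 8-byte alignment for simplicity.
    void* allocate(std::size_t n) {
        std::size_t aligned = (n + 7) & ~std::size_t(7);
        if (offset_ + aligned > buffer_.size()) return nullptr;  // pool exhausted
        void* p = buffer_.data() + offset_;
        offset_ += aligned;
        return p;
    }

    // Wipe the whole event's worth of objects in O(1), between events.
    void reset() { offset_ = 0; }

    std::size_t used() const { return offset_; }

private:
    std::vector<char> buffer_;
    std::size_t offset_;
};
```

The catch, of course, is that resetting the buffer skips destructors, so this only works cleanly for data that doesn’t need them – which is part of why standardizing the idea is subtler than it looks.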
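For anyone who hasn’t fought with for_each and friends, here is a hedged before/after sketch of what the functor-class complaint and the lambda enthusiasm are about. The cut value and all the names are invented for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// The old way: a whole named functor class, usually defined far from where
// it is used, just to express a one-line selection cut.
struct PassesPtCut {
    bool operator()(double pt) const { return pt > 20.0; }
};

// Counting tracks above the cut, pre-C++0x style.
long count_tracks_old(const std::vector<double>& track_pt) {
    return std::count_if(track_pt.begin(), track_pt.end(), PassesPtCut());
}

// The C++0x lambda: the cut lives right at the call site.
long count_tracks_new(const std::vector<double>& track_pt) {
    return std::count_if(track_pt.begin(), track_pt.end(),
                         [](double pt) { return pt > 20.0; });
}
```

Both count the same thing; the lambda version just keeps the cut next to the loop that applies it, which is exactly the locality argument from the discussion.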
Usually as someone rises in importance in their field they get more and more diplomatic – it is almost a necessity. If that is the case, Bjarne must have been pretty rough when he was younger! It was great to see someone attempting to steer-by-committee something he invented vent his frustrations, show his passion, name names, and at one point threaten to give out phone numbers (well, not really, but he came close). He can no longer steer the language exactly as he wants, but he is clearly still very much guiding it.
You can find slides that were used to guide the informal discussion here. I think archived video from the plenary presentation will appear linked to here eventually if you are curious.
Energy vs Power vs Heat vs Oh no!
July 5, 2009. Posted by gordonwatts in CERN, LHC.
Last post I mentioned the LHC update that was given at a recent meeting at CERN. One cool thing Steve Myers showed during his talk was a discussion of the quality of the splices and how it might affect the LHC’s ability to run.
For a sample of the trade-off, check out this plot, stolen from page 46 in the talk.
Along the x axis is the measured resistance between two magnets (across the splice). The units there are nano-ohms – something only the most expensive multimeters can measure. If you remember your Physics 101 course, you remember P=I^2R (power is current squared times resistance). The units of P are watts (!) – just like your light bulb. These are superconducting magnets, of course. The magnets are very powerful and have thousands of amps of current flowing through them. So even tiny R’s mean decent heat sources. Heat warms up the magnets and makes them no longer superconducting – and that can be a disaster (a few magnets quenching is not a problem – it happens every now and then – but a chain reaction is what caused last September’s accident). So the splices, which aren’t superconducting, need to be excellent and have almost no resistance. Like 10-15 nano-ohms.
The Y axis is how much current you are pumping through the magnet. Current is proportional to the magnetic field, which is proportional to the energy we can run the LHC at. As you can see, if you can run at about 6700 amps you can run at 4 TeV. If you run at 8300 amps then you can run at 5 TeV.
The red and green lines are the keys to reading this plot – they are two different assumptions about the state of the copper joints. The LHC machine folks always talk about the worst case scenario (the red line) – but I’m not 100% sure what the difference between the two is. Let’s say you want to run at 5 TeV. Follow the 5 TeV line over from the left of the plot until it hits the red line. You see that it comes down at 58 nOhms. That means all splices have to be less than 58 nOhms in order to run at this energy. The machine is full of these splices, so checking all of them is a bunch of work!! [listen to the video on the agenda page at about 30 minutes in]. So one of the things the LHC engineers are doing is measuring all the splice resistances and then putting them on that plot to see where they stand.
BTW, nominal is 10-12 nOhms, and they need to be less than 25 to run at a full 7 TeV (two beams at 7 TeV each gives you 14 TeV collisions, the design energy of the LHC).
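The arithmetic behind all of this is just the P=I^2R from above. A quick sketch, using the currents and resistances quoted in the talk (the function name is mine):

```cpp
#include <cassert>

// Resistive power dissipated in one splice: P = I^2 * R.
// For the worst case quoted for 5 TeV running (8300 A through a 58 nano-ohm
// splice) this gives 8300^2 * 58e-9, or about 4 W of continuous heat - tiny
// for a light bulb, but a real load inside a superconducting cryostat.
// A nominal 12 nOhm splice at the same current dissipates only ~0.8 W.
double splice_power_watts(double current_amps, double resistance_ohms) {
    return current_amps * current_amps * resistance_ohms;
}
```

That factor of I squared is also why the allowed resistance shrinks so quickly as the target energy (and hence the current) goes up.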
LHC News
July 3, 2009. Posted by gordonwatts in CERN, LHC.
Sorry if this is old news…
CERN management recently had a council meeting. These meetings take place between the council and the CERN director general. Big funding changes, new projects, major schedule changes, a new country wanting to join CERN, etc., all have to be approved by this council. As you might imagine, recent council meetings have been dominated by “schedule changes” (I don’t actually know if that is true as a function of time, but I would imagine).
What is nice about the current CERN DG is that he usually immediately sends a message out to the public and to the CERN folks. Much better than reading about an updated CERN LHC schedule in the Geneva newspaper. Even better, a presentation to all of CERN (and an open webcast) is scheduled. The last one just happened (there is a video link and slides link at the top of the agenda, just under the main agenda title).
Everyone is eager for data. I’ve discussed what I think are some of the pressures on the accelerator division previously. This meeting is a continuing part of that conversation.
It is clear they are doing a huge amount of work, and that a huge amount is already done. From a physicist’s point of view, the most frustrating thing about Steve Myers’ talk was that there was no date and no energy. It wasn’t clear to me what the plan was until someone asked a question at the very end of the talk. Basically, they will have measured the splices (electrical connections) in all of the LHC by early August. Those splices are what caused the disaster last September – so it is important that all of them be carefully measured. And once they have measured everything, then they can start a discussion with the experiments on startup schedule and energy.
Next time something on the trade offs…
Spring in Geneva
April 30, 2009. Posted by gordonwatts in CERN, life.
I’ve been here in Geneva for almost a week now – and it has been all rain. Today, for the first time, it was really sunny. I almost missed my first meeting of the day to snap a few pictures of the amazing fields of yellow (click for much larger versions!).
And this one for dramatic effect:
2009. Ready or not
January 2, 2009. Posted by gordonwatts in ATLAS, CERN, D0, Fermilab, LHC, politics, science.
We’ve made it through the first day of 2009. I have mixed feelings about this coming year.
- Federal Science Funding Levels. The economy is crashing down around our ears. Business responds quickly (layoffs :() – government is a bit slower. If things followed their natural course, that would mean science funding, along with everything else, would take yet another hit. However, the incoming Obama administration seems to be committed to spending the USA’s way out of this recession, so in the end funding might not change very much. I am hopeful that hard sciences funding will remain at least stable.
- Federal Science Funding Directions. Climate change is what the Obama administration is focused on. There is a good chance that if you are researching something connected with climate change you will have access to increased funding opportunities. I would expect a funding profile similar to NIH’s during its years of increase. I would like to think that funding will spill over into the physical sciences – it should, because there are connections between the physical sciences and clean air technologies. All of this is applied scientific research. I hope that pure research funding gets an increase as well, as an investment in this country’s future (particle physics is pure research, of course). I’m feeling neutral here.
- Federal Science. Obama’s science team is just a BLAST of fresh air when compared to the current administration’s. After all, his DOE nominee is a Nobel prize winning experimental physicist. Even if the science advisor isn’t elevated to a cabinet position (PDF), there will be someone in the room who knows a great deal about science, research, and how it is done. Even if there are cuts to science funding, I’m very hopeful there will be intelligent cuts rather than unscientifically motivated cuts. I’m very hopeful in this respect.
- State Universities. The state economies are depressing. Some states, like my own (Washington), that rely on sales tax are being hit hard and very fast. State universities can’t escape that, obviously, and my university is no exception. Unfortunately, this usually translates to reduced raises, inability to counter offers from outside, reduced support for research, etc. In our own department I wouldn’t be surprised if some people left for other universities that, for whatever reason, were able to make good offers in this awful climate. There is, in fact, already evidence this is happening. The only consolation is that most universities are in the same boat, and so most of them are having similar problems. I know less about private universities, but I do know the endowments of many of them are also having difficulty. I’m very downbeat about this: it will be a rough two years at least, I think.
- My Science. When it comes to the Tevatron and the LHC… Well, I see no reason the Tevatron shouldn’t continue to break records in luminosity (they just broke one earlier this week). And the experiments will continue to be flooded with data. While it is possible for one experiment or the other to have a catastrophic failure, I doubt that will happen. And they should continue to produce papers and science at a furious rate. I also am looking forward to real LHC collision data this year. While I hope it will be at the full 14 TeV, I suspect it is more likely to be at 2 TeV, just a hair above the Tevatron’s energy. We’ll hopefully know what the machine scientists think about that sometime in February. I’m really hopeful about this.
- New Year’s Resolutions. Well, I made only one. That way I have a hope of keeping it: make bread more often. I think there is a chance that I will keep this one. Especially now that I’ve said it publicly.
Of course, this should also be a fun year, as noted by the Beacon News:
Frustrated with their failed attempt to destroy the world in 2008, the scientists at Fermilab and their counterparts at Switzerland’s CERN physics lab resolve to perfect their new device, the Large Planet-Sucking Black-Hole-o-Tron.
Here is to another great year of data collection and science at the Tevatron and first collision data at the LHC!