
Trends in Triggering: Offline to online June 5, 2015

Posted by gordonwatts in ATLAS, LHC, Trigger.
2 comments

The recent LHCC open meeting is a great place to look to see the current state of the Large Hadron Collider’s physics program. While watching the talks I had one of those moments. You know – where suddenly you realize something that you’d seen here and there isn’t just something you’d seen here and there, but that it is a trend. It was the LHCb talk that drove it home for me.

The trend: running offline-quality reconstruction code directly in the online trigger. There are many reasons this is desirable, which I’ll get to in a second, but the reason everyone is starting to do it is that it has finally become possible. Moore’s law is at the root of this, along with the fact that we take software more seriously than we used to.

First, some context. Software in the trigger lives in a rather harsh environment. Take the LHC. Every 25 ns a new collision occurs. The trigger must decide if that collision is interesting enough to keep, or not. Interesting, of course, means cool physics like a collision that might contain a Higgs or perhaps some new exotic particle. We can only afford to save about 1000 events per second. Afford, by the way, is the right word here: each collision we wish to save must be written to disk and tape, and must be processed multiple times, spending CPU cycles. It turns out the cost of CPU cycles is the driver here.

Even with modern processors, 25 ns isn’t a lot of time. As a result we tend to divide our trigger into levels. Traditionally the first level is hardware – fast and simple – and can make a decision in the first 25 ns. A second level is often a combination of specialized hardware and standard PCs. It can take a little longer to make its decision. And the third level is usually a farm of commodity PCs (think GRID or cloud computing). Each level gets to take a longer amount of time and make more careful calculations to reach its decision. Already Moore’s law has basically eliminated Level 2. At the Tevatron, DZERO had a hardware/PC Level 2; ATLAS had a PC-only Level 2 in the 2011-2012 run, and now even that is gone in the run that just started.
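To make the rate arithmetic concrete, here is a minimal sketch in Python. The 40 MHz input rate follows from the 25 ns bunch spacing and the ~1000 Hz output is the number quoted above; the intermediate rejection factors are illustrative assumptions, not any experiment’s real trigger menu:

```python
# A toy picture of the cascade: the rejection factors below are illustrative
# assumptions, not the real trigger-menu numbers of any experiment.
BUNCH_CROSSING_RATE_HZ = 40_000_000   # one collision every 25 ns

trigger_levels = [
    ("Level 1 (custom hardware, fast and simple)", 1 / 400),   # ~40 MHz -> ~100 kHz
    ("High Level Trigger (commodity PC farm)",     1 / 100),   # ~100 kHz -> ~1 kHz
]

rate = float(BUNCH_CROSSING_RATE_HZ)
for name, keep_fraction in trigger_levels:
    rate *= keep_fraction
    print(f"{name}: output rate ~{rate:,.0f} Hz")
# Ends at ~1,000 Hz - roughly the 1000 events per second we can afford to keep.
```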

Traditionally the software that ran in the 3rd level trigger (often called a High Level Trigger, or HLT for short) consisted of carefully optimized, custom-designed algorithms. Often only a select part of the collaboration wrote these, and there were lots of coding rules involved to make sure extra CPU cycles (time) weren’t wasted. CPU is of utmost importance here, and every additional physics feature must be balanced against the CPU cost. The trigger will find charged particle tracks, but perhaps only ones that can be quickly found (e.g. obvious ones). The ones that take a little more work get skipped in the trigger because finding them would take too much time!

Offline, on the other hand, was a different story. Offline refers to reconstruction code – this is code that runs after the data is recorded to tape. It can take its time – it can carefully reconstruct the data, looking for charged particle tracks anywhere in the detector, applying the latest calibrations, etc. This code is written with physics performance in mind, and traditionally, CPU and memory performance have been secondary (if that). Generally the best algorithms run here – if a charged particle track can be found by an algorithm, this is where that algorithm will reside. Who cares if it takes 5 seconds?

Traditionally, these two code bases have been exactly that: two code bases. But this does cause some physics problems. For example, you can have a situation where your offline code will find an object that your trigger code does not, or vice versa. And thus when it comes time to understand how much physics you’ve actually written to tape – a crucial step in measuring a particle like the Higgs, or searching for something new – the additional complication can be… painful (I speak from experience!).

Over time we’ve gotten much better at writing software. We now track performance in a way we never have before: physics, CPU, and memory are all measured on releases built every night. With modern tools we’ve discovered that… holy cow!… applying well known software practices means we can have our physics performance and CPU and memory performance too! And in the few places that just isn’t possible, there are usually easy knobs we can turn to reduce the CPU requirements. And even if we have to make a small CPU sacrifice, Moore’s law helps out and takes up the slack.

In preparation for Run 2 at the LHC, ATLAS went through a major software re-design. One big effort was to move as many of the offline algorithms into the trigger as possible. This was a big job – the internal data structures had to be unified, and the offline algorithms’ CPU performance was examined in a way it had never been before. In the end ATLAS will have less software to maintain, and it will have (I hope) more understandable reconstruction performance when it comes to doing physics.

LHCb is doing the same thing. I’ve seen discussions about new experiments planning to run their offline reconstruction in the trigger and write out only its output. Air shower arrays searching for large cosmic-ray showers often do quite a bit of final processing in real-time. All of this made me think these were not isolated occurrences. I don’t think anyone has labeled this a trend yet, but I’m ready to.

By the way, this does not mean offline code and algorithms will disappear. There will always be versions of the algorithms that use huge amounts of CPU power to get the last 10% of performance. And the offline code is not run until several days after the data is taken, in order to make sure the latest and greatest calibration data has been distributed. This calibration data is much more fine grained (and recent) than what is available to the trigger. Though as Moore’s law and our ability to better engineer the software improve, perhaps even this will disappear over time.

The Higgs. Whaaaa? July 6, 2012

Posted by gordonwatts in ATLAS, CMS, Higgs, LHC, physics, press.
9 comments

Ok. This post is for all my non-physics friends who have been asking me… What just happened? Why is everyone talking about this Higgs thing!?

It does what!?

Actually, two things. It gives fundamental particles mass. Not much help, eh? Fundamental particles are, well, fundamental – the most basic things in nature. We are made out of arms & legs and a few other bits. Arms & legs and everything else are made out of cells. Cells are made out of molecules. Molecules are made out of atoms. Note we’ve not reached anything fundamental yet – we can keep peeling back the layers of the onion and peer inside. Inside the atom are electrons in a cloud around the nucleus. Yes! We’ve got a first fundamental particle: the electron! Everything we’ve done up to now says it stops with the electron. There is nothing inside it. It is a fundamental particle.

We aren’t done with the nucleus yet, however. Pop that open and you’ll find protons and neutrons. Not even those guys are fundamental, however – inside each of them you’ll find quarks – about 3 of them. Two “up” quarks and a “down” quark in the case of the proton and one “up” quark and two “down” quarks in the case of the neutron. Those quarks are fundamental particles.

The Higgs interacts with the electron and the quarks and gives them mass. You could say it “generates” the mass. I’m tempted to say that without the Higgs those fundamental particles wouldn’t have mass. So, there you have it. This is one of its roles. Without this Higgs, we would not understand at all how electrons and quarks have mass, and we wouldn’t understand how to correctly calculate the mass of an atom!

Now, any physicist who has made it this far is cringing at my last statement – as a quick reading of it implies that all the mass of an atom comes from the Higgs. It turns out that we know of several different ways that mass can be “generated” – and the Higgs is just one of them. It also happens to be the only one that, up until July 4th, we didn’t have any direct proof for. An atom, a proton, etc., has contributions from more than just the Higgs – indeed, most of a proton’s mass (and hence, an atom’s mass) comes from another mechanism. But this is a technical aside. And by reading this you know more than many reporters who are talking about the story!

The Higgs plays a second role. This is a little harder to explain, and I don’t see it discussed much in the press. And, to us physicists, this feels like the really important thing. “Electro-Weak Symmetry Breaking”. Oh yeah! It comes down to this: we want to tell a coherent, unified story from the time of the big-bang to now. The thing about the big-bang is that it was *really* hot. So hot, in fact, that the rules of physics that we see directly around us don’t seem to apply. Everything was symmetric back then – it all looked the same. We have quarks and electrons now, which gives us matter – but then it was so hot that they didn’t really exist – rather, we think, some single type of particle existed. Then, as the universe cooled down from the big bang, making its way towards present day, new particles froze out – perhaps the quarks froze out first, and then the electrons, etc. Let me see how far I can push this analogy… when water freezes, it does so into ice crystals. Say that an electron was one particular shape of ice crystal and a quark was a different shape. So you go from a liquid state where everything looks the same – heck – it is just water, to a solid state where the ice crystals have some set of shapes – and by their shape they become electrons or quarks.

Ok, big deal. It seems like the present day “froze” out of the Big Bang. Well, think about it. If our current particles evolved out of some previous state, then we’d sure as hell better be able to describe that freezing process. Even better – we had better be able to describe that original liquid – the Big Bang. In fact, you could argue, and we definitely do, that the rules that governed physics at the big bang would have to evolve into the rules that describe our present day particles. They should be connected. Unified!! Ha! See how I slipped that word in up above!?

We know about four forces in the universe: the strong (holds a proton together), weak (radioactive decay is an example), electro-magnetism (cell phones, etc. are examples), and gravity. The Higgs is a key player in the unification of the weak force and the electro-magnetic force. Finding it means we actually have a bead on how nature unifies those two forces. That is HUGE! This is a big step along the way to putting all the forces back together. We still have a lot of work to do!

Another technical aside. We think of the first role – giving fundamental particles mass – as a consequence of the second; they are not independent roles. The Higgs is key to the unification, and in order to be that key, it must also be the source of the fundamental particles’ mass.

How long have you been searching for it?

A loooooong time. We are like archeologists. Nature is what nature is. Our job is to figure out how nature works. We have a mathematical model (called the Standard Model). We change it every time we find an experimental result that doesn’t agree with the calculation. The last time that happened was when we stumbled upon the unexpected fact that neutrinos have mass. The time before that was the addition of the Higgs, and that modification was first proposed in 1964 (it took a few years to become generally accepted). So, I suppose you could say in some sense we’ve been looking for it since 1964!

It wasn’t until recently, however (say in the late 90’s), that the machines we use became powerful enough that we could honestly say we were “in the hunt for the Higgs.” The LHC, actually, had finding the Higgs as one of its major physics goals. There was no guarantee – no reason nature had to work like that – so when we built it we were all a little nervous and excited… ok. a lot nervous and excited.

So, why did it take so long!? The main reason is we hardly ever make it in our accelerators! It is very very massive!! So it is very hard to make. Even at the LHC we make one every 3 hours… The LHC works by colliding protons together at a very high speed (almost the speed of light). We do that more than 1,000,000 times a second… and we make a Higgs only once every 3 hours. The very definition of “needle in a haystack!”
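If you like seeing the “needle in a haystack” as a number, here is a quick back-of-the-envelope sketch (using only the round figures quoted above; the real collision rate is higher and depends on how intense the beams are):

```python
# Rough needle-in-a-haystack arithmetic, using the round numbers from the
# text above (real rates are larger and depend on beam intensity).
collisions_per_second = 1_000_000    # "more than 1,000,000 times a second"
seconds_per_higgs = 3 * 60 * 60      # "one every 3 hours"

collisions_per_higgs = collisions_per_second * seconds_per_higgs
print(f"About 1 Higgs for every {collisions_per_higgs:.0e} collisions")
# -> About 1 Higgs for every 1e+10 collisions
```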

Who made this discovery?

Two very large teams of physicists, and a whole bunch of people running the LHC accelerator at CERN. The two teams are the two experiments: ATLAS and CMS. My colleagues at UW and I are on ATLAS. If you hear someone say “I discovered the Higgs” they are using the royal-I. This is big science. Heck – the detector is half an (American) football field long, and about 8 or 9 stories tall and wide. This is the sort of work that is done by lots of people and countries working together. ATLAS currently has people from 38 countries – the USA being one of them.

What does a Cocktail Party have to do with it?

The cocktail party analogy is the answer to why some fundamental particles are more massive than others (sadly, not why I have to keep letting my belt out year-after-year).

This is a cartoon of a cocktail party. Someone very famous has just entered the room. Note how everyone has clumped around them! If they are trying to get to the other side of the room, they are just not going to get there very fast!!

Now, let’s say I enter the room. I don’t know that many people, so while some friends will come up and talk to me, it will be nothing like that famous person. So I will be able to get across the room very quickly.

The fact that I can move quickly because I interact with few people means I have little mass. The famous person has lots of interactions and can’t move quickly – and in this analogy they have lots of mass.

Ok. Bringing it back to the Higgs. The party and the people – that is the Higgs field. How much a particle interacts with the Higgs field determines its mass. The more it interacts, the more mass is “generated.”

And that is the analogy. You’ve been reading a long time. Isn’t this making you thirsty? Go get a drink!

Really, is this that big a deal?

Yes. This is a huge piece of the puzzle. This work is definitely worth a Nobel prize – look for them to award one to the people that first proposed it in 1964 (there are 6 of them, one has passed away – no idea how the committee will sort out the max of 3 they can give it to). We have confirmed a major piece of how nature works. In fact, this was the one particle that the Standard Model predicted that we hadn’t found. We’d gotten all the rest! We now have a complete picture of the Standard Model, and it is time to start work on extending it. For example, dark matter and dark energy are not yet in the Standard Model. We have not figured out how to fully unify everything we know about.

And no, the economy won’t see an up-tick or a down-tick because of this. This is pure research – we do it to understand how nature and the universe around us works. There are sometimes, by-luck, spin-offs. And there are people that work with us who take it on as one of their tasks to find spin offs. But that isn’t the reason we do this.

What is next?

Ok. You had to ask that. So… First, we are sure we have found a new boson, but the real world – and the data – is a bit messy. We have looked for it, and expect it to appear in several different places. It appeared in most of them – one place it seems to be playing hide and seek (where the Higgs decays to taus – a tau is very much like a heavy electron). Now, only one of the two experiments has presented results in the taus (CMS), so we have to wait for my experiment, ATLAS, to present its results before we get worried.

Second, and this is what we’d be doing no matter what happened to the taus, is… HEY! We have a shiny new particle! We are going to spend some years looking at it from every single angle possible, taking it out for a test drive, you know – kicking the tires. There is actually a scientific point to doing that – there are other possible theories out there that predict the existence of a Higgs that looks exactly like the Standard Model Higgs except for some subtle differences. So we will be looking at this new Higgs every-which way to see if we can see any of those subtle differences.

ATLAS and CMS also do a huge amount of other types of physics – none of which we are talking about right now – and we will continue working on those as well.

Why do you call it the God Particle!?

We don’t. (especially check out the Pulp Fiction mash-up picture).

What will you all discover next?

I’ll get back to you on that…

Whew. I’m spent!

Boom! March 30, 2010

Posted by gordonwatts in CERN, LHC.
3 comments

Do not start with a whimper… Start with a…

[ATLAS event display: run 152166, event 639756]

or perhaps you’d prefer a…

[ATLAS event display: run 152166, event 399473]

In that first event the small red track is actually a muon candidate – something we could almost never see with the 900 GeV collisions from last December; very little was powerful enough to make it out that far.

So, now the real work begins. Soon the press will pack up and we can get down to actually making sense of this fantastic new microscope we’ve been given! It is going to be a fun 18 months of first data!

For more event pictures, from all the experiments, not just ATLAS, see the main CERN web page.

Collisions March 30, 2010

Posted by gordonwatts in CERN, LHC.
3 comments

Wow. It is almost 5 am. I have a meeting in 3 hours. It is a bit anti-climactic watching it alone in the dark here in Seattle. But still. This is the beginning. The next year will have many more sleepless nights. A job well done by everyone who has worked for so long to see these first collisions – many for 20 years or more!

What do you mean it isn’t about the $$? December 16, 2009

Posted by gordonwatts in ATLAS, CERN, LHC, life.
3 comments

A cute article in Vanity Fair:

Among the defining attributes of now are ever tinier gadgets, ever shorter attention spans, and the privileging of marketplace values above all. Life is manically parceled into financial quarters, three-minute YouTube videos, 140-character tweets. In my pocket is a phone/computer/camera/video recorder/TV/stereo system half the size of a pack of Marlboros. And what about pursuing knowledge purely for its own sake, without any real thought of, um, monetizing it? Cute.

Something I found out from this article – the LHC is the largest machine ever built. Ok. Wow. Ever!? I would have thought that something like a large aircraft carrier would have beaten it. Still.

The attention span is another interesting aspect I’d not thought about. You know that the first space shuttles were using magnetic core memory (see the reference in that Wikipedia article). There were a number of reasons for this – one of them was certainly that there was no better technology available when they started. Before the shuttle was built, more robust memory appeared – but it was too late to redesign. Later space shuttles were fitted with more modern versions of the memory.

In internet time, 6 months or a year and you are already a version behind. And it matters. It would seem part of the point of the now is to be using the latest and greatest. You know how everyone stands around a water cooler discussing the latest episode of some TV show (e.g. Lost when it first started). Now it is the latest iPhone update or some other cool new gadget. Oops. Hee hee. I said water cooler. How quaint. Obviously, I meant facebook.

Projects like the space shuttle or the LHC take years and years. And a lot of people have to remain focused for that long. And so do the governments that provide the funding. You know how hard that is – especially for a place like the USA where every year they discuss the budget? It is hard. Some people have been working on this for 20 years. 20 years! And now data is finally arriving. Think about that: designs set down 20 years ago have finally been built and installed and integrated and tested.

This science does not operate on internet time. But we are now deep in the age of internet time. How will the next big project fare? Will we as a society have the commitment to get it done?

I like the writing style in this VF article – a cultural look at the LHC. They do a good job of describing the quench as well. I recommend the read. And, finally, yes, this post ended up very different from the way it started. 🙂

Thanks to Chris @ UW for bringing this article to my attention.

First LHC Collisions at ATLAS November 23, 2009

Posted by gordonwatts in ATLAS, LHC.
3 comments

If you follow newspapers, facebook, or twitter, you’ve undoubtedly seen these already – but the LHC has done it – managed to collide two beams of protons! They never made it that far last year. Here is an event from ATLAS:

[ATLAS event display: run 140541, event 171897]

That isn’t to say there isn’t a lot of work left to do. These collisions are at an energy of 900 GeV, which is much less than the 7,000 GeV they plan to get up to by the end of running in 2010. And the beams are not very intense yet. Still!!!

I’m currently in Seattle – I wish I could have been there for this, in or around the control room – though I would have been mostly in the way for this phase. Unlike at the Tevatron, I wasn’t really responsible for any bit of the detector or DAQ at ATLAS – and those are the people that need to be there right now. Still, I would have loved to have been there.

Ironically, I first heard about these collisions from facebook. People in the control room that I’m friends with were posting status updates as the LHC tuned up its beam. Press releases, twitter, etc., all lagged behind that. And of the people I’m friends with, a theorist posted the news second (!) – this theorist was not a member of any collaboration. Ahhh… new media! 😉

So, taking that sentiment to the limit. I must now ignore the LHC and get back to preparing for class! Must. Not. Look. At. Accelerator. Status. <said in best Cptn’ Kirk voice>.

(more events will be posted here as they show up).

Fizzle! August 4, 2009

Posted by gordonwatts in ATLAS, Fermilab, LHC, Tenure, university.
5 comments

The biggest, most expensive physics machine in the world is riddled with thousands of bad electrical connections.

Ouch.

So starts a mostly accurate article in the New York Times about the current state of the LHC. There is good news and bad news in this sentence. To paraphrase a famous politician currently sight-seeing north of South Korea, it really depends on your definition of the word bad. To most people, if someone says that the electrical connection between your light and the wall socket is bad, then that means your light won’t work. That is the normal definition of bad. We High Energy Physicists have a different definition of bad. 🙂

For us, bad means that the connection isn’t going to conduct as much current as it could (I had a blog post about this a while back – but this article contains an excellent explanation – well worth registering if you have to in order to read it). And this is the reason behind the timing of this article. As I mentioned in that post, it would not be until the beginning of August that the LHC group of scientists would have finished measuring all those connections – all those splices – and know exactly how bad they were. Tomorrow the LHC and CERN will announce exactly what energy they will run the LHC at initially.

But scientists say it could be years, if ever, before the collider runs at full strength, stretching out the time it should take to achieve the collider’s main goals…

And that is the bad part of the news. The bad connections mean that we can’t run at the full 14 TeV energy – we will run something short of that (I’m betting it will be 7.5 TeV – if I get it right it isn’t because I have inside information from the accelerator group!). The article is correct that running at this reduced energy won’t give us access to the science we’d all expected and hoped for had we been running at 14 TeV.

But another thing to keep in mind is: we need data. Any data. And not to discover something new – because we need to tune up and commission our detectors! We’ve never run these things in anything but a simulated collider environment or looking for cosmic rays. We would probably be able to keep ourselves busy for almost a year with two months of data.

Peter Limon, a physicist from Fermilab, got it right:

“These are baby problems,” said Peter Limon, a physicist at the Fermi National Accelerator Laboratory in Batavia, Ill., who helped build the collider.

Indeed, these are birthing problems – no one has ever run a machine like this before. Which brings me to the one spot in the article that got my hackles up:

“I’ve waited 15 years,” said Nima Arkani-Hamed, a leading particle theorist at the Institute for Advanced Study in Princeton. “I want it to get up running. We can’t tolerate another disaster. It has to run smoothly from now.”

Nima, whom I also know (and like), is a theorist. If an experimentalist said this we would all make them run outside, turn around three times, and spit to the north to cancel the jinx they would have just placed on the machine. I think we can all guarantee that there are going to be other failures and problems that occur. We hope none of them are as bad as this last one. But if they are, we will do exactly what we’ve done up to now: pick up the bits, study them, figure out exactly what we did wrong, and then fix it better than it was originally made, and try again.

There was one last quote in that article I would have liked to have seen more of a back story to:

Some physicists are deserting the European project, at least temporarily, to work at a smaller, rival machine across the ocean.

The story behind this is fascinating because it is where science meets humanity. The machine across the ocean is the Tevatron at Fermilab (I’m on one of the experiments there, DZERO). There is plenty of science still there, and the race for the Higgs is very much alive – more so with each delay in the LHC. So scientifically it is attractive. But, there is also the fact that a graduate student in the USA must use real data in their thesis. Thus the delays in the LHC mean that it will take longer and longer for the graduate students to graduate. In the ATLAS LHC experiment the canonical number of graduate students I hear quoted is about 800. Think of that – 800 Ph.D.’s all getting ready to graduate – about 1/3rd or more of them waiting for the first data (talk about a “big bang”). Unfortunately, you can’t be a graduate student forever – so at some point the LHC delays stretch long enough that you have to move back to the USA in order to get a timely thesis. Similar pressures exist for post-docs and for professors trying to get tenure.

UPDATE: Just announced earlier today: they will start with 3.5×3.5 – that is, 7 TeV center of mass. This is exactly half the design energy of the LHC. The hope is that if all runs well at that energy they can slowly ramp up to 4×4, or 8 TeV. At 8 TeV things start to get interesting, as a decent amount of data at that energy will provide access to things that the Fermilab Tevatron can’t reach. Fingers crossed all goes well!

Energy vs Power vs Heat vs Oh no! July 5, 2009

Posted by gordonwatts in CERN, LHC.
2 comments

Last post I mentioned the LHC update that was given at a recent meeting at CERN. One cool thing Steve Myers showed during his talk was a discussion of the quality of the splices and how it might affect the LHC’s ability to run.

For a sample of the trade-off, check out this plot, stolen from page 46 in the talk.

[Plot from page 46 of the talk: measured splice resistance vs. magnet current]

Along the x axis is the measured resistance between two magnets (across the splice). The units there are nano-ohms – something only the most expensive multimeters can measure. If you remember your Physics 101 course, you remember P=I^2R (power is current squared times resistance). The units of P are Watts (!) – just like your light bulb. These are superconducting magnets, of course. The magnets are very powerful and so have thousands of amps of current flowing through them. So even small R’s mean decent heat sources. Heat warms up the magnets and makes them no longer superconducting – and that can be a disaster (a few of these are not a problem – it happens every now and then – but a chain reaction is what caused the accident last September). So – the splices, which aren’t superconducting, need to be excellent and have almost no resistance. Like 10-15 nano-Ohms.

The Y axis is how much current you are pumping through the magnet. Current is proportional to the magnetic field, which is proportional to the energy we can run the LHC at. As you can see, if you can run at about 6700 amps you can run at 4 TeV. If you run at 8300 amps then you can run at 5 TeV.

The red and green lines are the keys to reading this plot – they are two different conditions for the state of the copper joints. The LHC machine folks always talk about the worst case scenario (the red line) – but I’m not 100% sure what the difference between the two is. Let’s say you want to run at 5 TeV. Follow the 5 TeV line over from the left of the plot until it hits the red line. You see that it drops down to 58 nOhms. That means all splices have to be less than 58 nOhms in order to run at this energy. The machine is full of these splices. So this is a bunch of work checking these guys!! [listen to the video on the agenda page at about 30 minutes in]. So, one of the things the LHC engineers are doing is measuring all the splice resistances and then putting them up on that plot to see where they are.

BTW, nominal is 10-12 nOhms, and they need to be less than 25 to run at a full 7 TeV (two beams at 7 TeV gives you 14 TeV, the design of the LHC).
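To make the arithmetic above concrete, here is a minimal sketch in Python. The 8300 A figure and the 58 and 25 nano-ohm limits are simply the numbers quoted in this post; the real acceptance criteria are more involved than a single number per energy:

```python
# Sketch only: the limits below are the single numbers quoted in this post,
# not the full LHC splice-acceptance criteria.

def splice_power_watts(current_amps, resistance_nano_ohms):
    """P = I^2 * R: heat dissipated in a single splice, in watts."""
    return current_amps**2 * resistance_nano_ohms * 1e-9

# At the ~8300 A needed for 5 TeV, a nominal 12 nOhm splice dissipates...
print(f"{splice_power_watts(8300, 12):.2f} W")   # ~0.83 W
# ...while a splice at the 58 nOhm worst-case limit dissipates
print(f"{splice_power_watts(8300, 58):.2f} W")   # ~4.0 W the cryogenics must carry away

MAX_SPLICE_RESISTANCE_NOHM = {5: 58, 7: 25}      # per target beam energy in TeV

def can_run_at(energy_tev, measured_splices_nohm):
    """True only if every measured splice is below the limit for this energy."""
    limit = MAX_SPLICE_RESISTANCE_NOHM[energy_tev]
    return all(r < limit for r in measured_splices_nohm)

print(can_run_at(5, [11, 14, 23, 41]))  # True: all below 58 nOhm
print(can_run_at(7, [11, 14, 23, 41]))  # False: 41 nOhm exceeds the 25 nOhm limit
```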

LHC News July 3, 2009

Posted by gordonwatts in CERN, LHC.
2 comments

Sorry if this is old news…

CERN management recently had a council meeting. These meetings take place between the council and the CERN director general. Big funding changes, new projects, major schedule changes, a new country wants to join CERN, etc., all have to be approved by this council. As you might imagine the recent council meetings have been dominated by the “schedule changes” (I don’t actually know as a function of time if that is true, but I would imagine).

What is nice about the current CERN DG is that he usually immediately sends a message out to the public and the CERN folks. Much better than reading about an updated CERN LHC schedule in the Geneva newspaper. Even better, a presentation to all of CERN (and an open webcast) is scheduled. The last one just happened (there is a video link and slides link at the top of the agenda, just under the main agenda title).

Everyone is eager for data. I’ve discussed what I think are some of the pressures on the accelerator division previously. This meeting is a continuing part of that conversation.

It is clear they are doing a huge amount of work. They have a lot done. A huge amount. From a physicist’s point of view, the most frustrating thing about Steve Myers’ talk was that there was no date and no energy. It wasn’t clear to me what the plan was until someone asked a question at the very end of the talk. Basically – they will have measured the splices (electrical connections) in all of the LHC in early August. Those splices are what caused the disaster last September – so it is important that all of them be carefully measured. And once they have measured everything – then they can start a discussion with the experiments on the start-up schedule and energy.

Next time something on the trade offs…

Da Vinci’s Take on the LHC February 9, 2009

Posted by gordonwatts in LHC, physics life.
3 comments

This is old news, but I stumbled across this for the first time. Check out this drawing:

[Da Vinci-style technical drawing of the CMS detector]

That is the CMS detector, taken apart. Stunning, huh?

Obviously, Da Vinci didn’t draw that – rather a member of the CMS collaboration, Sergio Cittolin, did. He is the project leader for the trigger and data acquisition systems for CMS. Apparently they are on the cover of the CMS physics Technical Design Reports (TDRs). Sadly, as I have only the electronic version, I never caught this! The drawings are beautiful. I want some large poster size ones to hang up outside my office at UW!

I found this in a Physics World article.