
Christmas Project December 28, 2017

Posted by gordonwatts in Uncategorized.

Every Christmas I try to do some sort of project. Something new. Sometimes it turns into something real and lasts for years. Sometimes it goes nowhere. Normally, I have an idea of what I’m going to attempt – usually it has been bugging me for months and I can’t wait till break to get it started. This year, I had none.

But, I arrived home at my parents’ house in New Jersey and there it was waiting for me. The house is old – more than 200 years old – and the steam furnace had just been replaced. For those of you unfamiliar with this method of heating a house: it is noisy! The furnace boils water, and the steam is forced up through the pipes to cast iron radiators. The radiators hiss through valves as the air is forced up – an iconic sound from my childhood. Eventually, after traveling sometimes four floors, the super-hot steam reaches the end of a radiator and the valve shuts off. The valves are cool – heat sensitive! The radiator, full of hot steam, then warms the room – and rather effectively.

The bane of this system, however, is that it can leak. And you have no idea where the leak is in the whole house! The only way you know: the furnace reservoir needs refilling too often. So… the problem: how to detect when the reservoir needs refilling? Especially with this new modern furnace, which can automatically refill its reservoir.

Me: Oh, look, there is a little LED that comes on when the automatic refilling system comes on! I can watch that! Dad: Oh, look, there is a little light that comes on when the water level is low. We can watch that.

Dad’s choice of tools: a wifi cam that is triggered by noise. Me: A Raspberry Pi 3, a photo-resistor, and a capacitor. Hahahaha. Game on!

What’s funny? Neither of us has detected a water refill since we started this project. In the first picture at the right you can see both of our devices: in the foreground, taped to the gas input line, is the cam watching the water-refill light through a mirror, and in the background (look for the yellow tape) is the Pi taped to the refill controller (with the capacitor and sensor hanging down, looking at the LED on the bottom of the box).

I chose the Pi because I’ve used it once before – for a Spotify endpoint. But never for anything that it is designed for. An Arduino is almost certainly better suited to this, but I wasn’t confident that I could get it up and running in the 3 days I had to make this (including time for ordering and shipping of all the parts from Amazon). It was a lot of fun! And consumed a bunch of time. “Hey, where is Gordon? He needs to come for Christmas dinner!” “Wait, are you working on Christmas day?” – for once I could answer that last one with an honest no! Hahaha.

I learned a bunch:

  • I had to solder! It has been a loooong time since I’ve done that. My first graduate student, whom I made learn how to solder before I let him graduate, would have laughed at how rusty my skills were!
  • I was surprised to learn, at the start, that the Pi has no analog-to-digital converter. I stole a quick and dirty trick that lots of people have used to get around this problem: time how long it takes a photoresistor to charge up a capacitor. This is probably the biggest source of noise in my system, but it does for crude measurements (a minimal sketch of the trick is below, after this list).
  • I got to write all my code in Python. Even interrupt handling (ok, no callbacks, but still!)
  • The Pi, by default, runs a full build of Linux. Also, Python 3! I made full use of this – all my code is in Python, with a bit of bash to help it get going. I used things like cron and pip – they were either there, or trivial to install. Really, for this project, I was never conscious of the Pi being anything less than a full computer.
  • At first I tried to write auto-detection code that would see any changes in the light levels and write them to a file… which was then served by a simple nginx webserver (seriously – that was about 2 lines of code to install). But between the noise in the system and the fact that we’ve not had a fill yet, I don’t know what my signal looks like… So, that code will have to be revised.
  • For now, I have to write a file with the raw data in it and analyze that – at least until I know what an actual signal looks like. So… how to get that data off the Pi – especially given that I can’t access it anymore now that I’ve left New Jersey? In the end I used some Python code to push the files to OneDrive (one way to do this is sketched below). Other than figuring out how to deal with OAuth2, it was really easy (and I’m still not done fighting the authentication battle). What will happen if/when it fails? Well… I’ve recorded the commands my Dad will have to execute to get the new authentication files down there. Hopefully there isn’t going to be an expiration!
  • To analyze the raw data I’ve used tools I’ve recently learned at work: numpy and Jupyter notebooks (a sketch of that analysis is below as well). They allow me to produce a plot like this one. The dip near the left-hand side of the plot is my Dad shining a flashlight at my sensors to see if I could actually see anything. The joker.
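
For the curious, here is a minimal sketch of that capacitor-charging trick. This is not the actual project code: the GPIO pin number, the discharge delay, and the wiring are all assumptions, but the pattern (drain the capacitor, then count how long it takes to charge back up through the photoresistor) is the standard one.

```python
# A minimal sketch (not the project's actual code) of the RC-timing trick:
# time how long a capacitor takes to charge through the photoresistor.
import time
import RPi.GPIO as GPIO

SENSOR_PIN = 18  # hypothetical pin wired to the photoresistor/capacitor junction

def read_light(pin=SENSOR_PIN):
    """Return a count proportional to the capacitor charge time.

    Brighter light -> lower photoresistor resistance -> faster charge -> smaller count.
    """
    # Drain the capacitor by driving the pin low for a moment.
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, GPIO.LOW)
    time.sleep(0.1)
    # Switch the pin to an input and count until the capacitor charges
    # past the logic-high threshold.
    GPIO.setup(pin, GPIO.IN)
    count = 0
    while GPIO.input(pin) == GPIO.LOW:
        count += 1
    return count

if __name__ == "__main__":
    GPIO.setmode(GPIO.BCM)
    try:
        while True:
            print(int(time.time()), read_light())
            time.sleep(5)
    finally:
        GPIO.cleanup()
```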
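
The post doesn’t say how the OneDrive push works, so here is one way it could be done – a sketch using the Microsoft Graph simple-upload endpoint with the requests library, assuming you have already fought through the OAuth2 battle and have a valid access token in hand. The file and folder names are made up.

```python
# One possible way (not necessarily the one used here) to push a small log
# file to OneDrive via the Microsoft Graph "simple upload" endpoint.
import requests

def upload_to_onedrive(local_path: str, remote_path: str, access_token: str) -> None:
    """PUT a small file (< 4 MB) into the signed-in user's OneDrive."""
    url = f"https://graph.microsoft.com/v1.0/me/drive/root:/{remote_path}:/content"
    with open(local_path, "rb") as f:
        resp = requests.put(
            url,
            headers={"Authorization": f"Bearer {access_token}"},
            data=f,
        )
    resp.raise_for_status()

# Example (token acquisition is the hard part and is elided here):
# upload_to_onedrive("light-log.txt", "furnace/light-log.txt", token)
```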
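
Finally, a sketch of the kind of notebook analysis described in the last bullet. The log file name and format (one reading per line: a Unix timestamp and the charge-time count) are assumptions, not the project’s actual layout.

```python
# A minimal notebook-style analysis of the raw light-level log.
import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("light-log.txt")      # hypothetical file: "timestamp count" per line
t, counts = data[:, 0], data[:, 1]

plt.plot((t - t[0]) / 3600.0, counts)
plt.xlabel("Hours since start")
plt.ylabel("Capacitor charge time (loop counts)")
plt.title("Light level at the refill LED")
plt.show()
```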

Pretty much the only thing I’d used before was Linux, and some very simple things with an older Raspberry Pi 2. If anyone is on the fence about this, I’d definitely recommend trying it out. It is very easy, and there are thousands of web pages with step-by-step instructions for most things you’ll want to do!

    Education and “Internet Time” June 30, 2015

    Posted by gordonwatts in Teaching, university.

    I saw this link on TechCrunch go by discussing the state of venture capital in the education sector. There is a general feeling, at least in the article, that when dealing with universities, things are not moving at internet speed:

    “The challenge is, in general, education is a pretty slow to move category, particularly if you’re trying to sell into schools and universities … In many cases they don’t seem to show the sense of urgency that the corporate world does.” says Steve Murray, a partner with Softbank Capital, and investor in the education technology company, EdCast.

    I had to laugh a bit. Duh. MOOCs are a classic example. Massively Open Online Courses – a way to educate large numbers of people with a very small staff. The article refers to the problems with this, actually:

    The first generation of massively open online courses have had (well-documented) problems with user retention.

    So why have universities been so slow to just jump into the latest and greatest education technology? Can you imagine sending your kid to get a degree from the University of Washington, where they are trying out some new way of education that, frankly, fails at university scale? We are a publicly funded university. We’d be shut down! The press, rightly, would eat us alive. No institution is going to jump before it looks and move its core business over to something that hasn’t been proven.

    Another way to look at this, perhaps, is that each university has a brand to maintain. Ok, I’m not a business person, so I probably am not using the word in quite the right way. Nonetheless. My department at the University of Washington, the Physics Department, is constantly looking at the undergraduate curriculum. We are, in some sense, driven by the question “What does it mean to have a degree from the University of Washington Physics Department?” or “What physics should they know?” or another flavor: “They should be able to explain and calculate X by the time they are awarded the degree.” There is a committee in the department that is responsible for adjusting the courses and material covered, and it is constantly proposing changes.

    So far only certain technological solutions have an obvious “value proposition.” For example, online homework websites: these enable students to practice problems without the department having to spend a huge amount of money on people to do the grading. Learning Management Systems, like Canvas, allow us to quickly set up a website for a course that includes just about everything we need as teachers, saving us a bunch of time.

    Those examples make teaching cheaper and more efficient. But that isn’t always the case. Research (yes, research!!!) has shown that students learn better when they are actively working on a problem (working in groups of peers is even more powerful) – so we can flip the classroom: have them watch lectures on video, and during traditional lecture time work in groups. To do it right, you need to redesign the room… which costs $$… And the professor now has to spend extra time recording the lectures. So there is innovation – and it is helping students learn better.

    I think most of us in education will happily admit that there are inefficiencies in the education system – but really big ones? The problem with the idea that there are really big inefficiencies is that no one has really shown how to educate people on the scale of a university in a dramatically cheaper way. As soon as that happens, the inefficiencies will become obvious, along with the approach to “fix” them. There are things we need to focus on doing better, and there are places that look like big inefficiencies… and MOOCs will have a second generation to address their problems. And all of us will watch the evolution, and some professors will work with the companies to improve their products… but it isn’t going to happen overnight, and it isn’t obvious to me that it will happen at all, at least not for the bulk of students.

    Education is labor intensive. In order to learn, the student has to put in serious time. And as long as this remains the case, we will be grappling with costs.

    Trends in Triggering: Offline to online June 5, 2015

    Posted by gordonwatts in ATLAS, LHC, Trigger.

    The recent LHCC open meeting is a great place to look to see the current state of the Large Hadron Collider’s physics program. While watching the talks I had one of those moments. You know – where suddenly you realize something that you’d seen here and there isn’t just something you’d seen here and there, but that it is a trend. It was the LHCb talk that drove it home for me.

    There are many reasons this move of offline-style reconstruction into the online trigger is desirable, which I’ll get to in a second, but the reason everyone is starting to do it is because it is possible. Moore’s law is at the root of this, along with the fact that we take software more seriously than we used to.

    First, some context. Software in the trigger lives in a rather harsh environment. Take the LHC. Every 25 ns a new collision occurs. The trigger must decide if that collision is interesting enough to keep, or not. Interesting, of course, means cool physics like a collision that might contain a Higgs or perhaps some new exotic particle. We can only afford to save about 1000 events per second. Afford, by the way, is the right word here: each collision we wish to save must be written to disk and tape, and must be processed multiple times, spending CPU cycles. It turns out the cost of CPU cycles is the driver here.
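
    To put rough numbers on that, here is the arithmetic implied by the paragraph above (a sketch, using the round figures from the text):

```python
# Rough numbers from the text: a collision every 25 ns, but only about
# 1000 events per second can be kept.
bunch_spacing_s = 25e-9                      # 25 ns between collisions
collision_rate_hz = 1.0 / bunch_spacing_s    # 40 million collisions per second
kept_per_second = 1000.0

rejection_factor = collision_rate_hz / kept_per_second
print(f"Collision rate: {collision_rate_hz:.0e} Hz")
print(f"Only about 1 in {rejection_factor:,.0f} collisions can be saved")
```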

    Even with modern processors 25 ns isn’t a lot of time. As a result we tend to divide our trigger into levels. Traditionally the first level is hardware – fast and simple – and can make a decision in the first 25 ns. A second level is often a combination of specialized hardware and standard PCs. It can take a little longer to make the decision. And the third level is usually a farm of commodity PCs (think GRID or cloud computing). Each level gets to take a longer amount of time and make more careful calculations for its decision. Already Moore’s law has basically eliminated Level 2. At the Tevatron, DZERO had a hardware/PC Level 2; ATLAS had a PC-only Level 2 in the 2011–2012 run, and now even that is gone in the run that just started.

    Traditionally the software that ran in the third-level trigger (often called a High Level Trigger, or HLT for short) consisted of carefully optimized, custom-designed algorithms. Often only a select part of the collaboration wrote these, and there were lots of coding rules involved to make sure extra CPU cycles (time) weren’t wasted. CPU is of utmost importance here, and every additional physics feature must be balanced against its CPU cost. The HLT will find charged particle tracks, but perhaps only ones that can be found quickly (e.g. obvious ones). The ones that take a little more work get skipped in the trigger because they would take too much time!

    Offline, on the other hand, was a different story. Offline refers to reconstruction code – this is code that runs after the data is recorded to tape. It can take its time – it can carefully reconstruct the data, looking for charged particle tracks anywhere in the detector, applying the latest calibrations, etc. This code is written with physics performance in mind, and traditionally, CPU and memory performance have been secondary (if that). Generally the best algorithms run here – if a charged particle track can be found by an algorithm, this is where that algorithm will reside. Who cares if it takes 5 seconds?

    Traditionally, these two code bases have been exactly that: two code bases. But this does cause some physics problems. For example, you can have a situation where your offline code will find an object that your trigger code does not, or vice versa. And thus when it comes time to understand how much physics you’ve actually written to tape – a crucial step in measuring a particle like the Higgs, or searching for something new – the additional complication can be… painful (I speak from experience!).

    Over time we’ve gotten much better at writing software. We now track performance in a way we never have before: physics, CPU, and memory are all measured on releases built every night. With modern tools we’ve discovered that… holy cow!… applying well known software practices means we can have our physics performance and CPU and memory performance too! And in the few places that just isn’t possible, there are usually easy knobs we can turn to reduce the CPU requirements. And even if we have to make a small CPU sacrifice, Moore’s law helps out and takes up the slack.

    In preparation for Run 2 at the LHC, ATLAS went through a major software re-design. One big effort was to move as many of the offline algorithms into the trigger as possible. This was a big job – the internal data structures had to be unified, and the offline algorithms’ CPU performance was examined in a way it had never been before. In the end ATLAS will have less software to maintain, and it will have (I hope) more understandable reconstruction performance when it comes to doing physics.

    LHCb is doing the same thing. I’ve seen discussions about new experiments running offline and writing only that out. Air shower arrays searching for large cosmic-ray showers often do quite a bit of final processing in real-time. All of this made me think these were not isolated occurrences. I don’t think anyone has labeled this a trend yet, but I’m ready to.

    By the way, this does not mean offline code and algorithms will disappear. There will always be versions of the algorithm that will use huge amounts of CPU power to get the last 10% of performance. The offline code is not run for several days after the data is taken in order to make sure the latest and greatest calibration data has been distributed. This calibration data is much more fine grained (and recent) than what is available to the trigger. Though as Moore’s law and our ability to better engineer the software improves, perhaps even this will disappear over time.

    Really? Is it that different? May 11, 2015

    Posted by gordonwatts in life, university.

    An article from the New York Times is making its rounds on various social media circles I’m a member of, “What is the Point of a Professor?” It has lots of sucker-punch quotes, like

    But as this unique chapter of life closes and they reflect on campus events, one primary part of higher education will fall low on the ladder of meaningful contacts: the professors.

    Or this one:

    In one national survey, 61 percent of students said that professors frequently treated them “like a colleague/peer,” while only 8 percent heard frequent “negative feedback about their academic work.” More than half leave the graduation ceremony believing that they are “well prepared” in speaking, writing, critical thinking and decision-making.

    Obviously implicit is that they aren’t well prepared! This is from an op-ed written by Mark Bauerlein, a professor at Emory. He also authored a book (which I have not read) titled:

    “The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future (or, Don’t Trust Anyone Under 30).”

    You can probably already tell this has pissed me off.

    This sort of hatchet job of a critique of university students gets it partly right but, I think, really misses the point. Sorting through the article and trying to pull out a central idea that he wants all professors to adopt, I came away with this quote:

    Since the early 2000s, I have made students visit my office every other week with a rough draft of an essay. We appraise and revise the prose, sentence by sentence. I ask for a clearer idea or a better verb; I circle a misplaced modifier and wait as they make the fix.

    He stresses this one-on-one interaction as the cure for all the ills he has outlined. Let me just say that if I were to devote this much time to each of my students, I’d still be single. In the modern age of universities and professors’ lives (and jobs), there just isn’t time! Too many people want a university education, and there just isn’t enough money in the education system to fund this sort of interaction (and it is getting worse at many of the nation’s largest public universities).

    But…!!

    But, frankly, if I look at my life and my work, it doesn’t seem that bad. I’m constantly mentoring undergraduates and graduate students. He claims that professors who do research don’t want interaction with their students because it detracts from their research… I doubt it is any different in English than it is in Physics – but that interaction is pretty much the only way I can get good undergraduates to start working with me! And I’m far from alone at the University of Washington.

    The two views (mine, that I’m doing plenty of mentoring, and his, that there isn’t enough contact) are compatible: student/professor ratios are an easy explanation. But that isn’t everything – my students are not the same sort of student I was. This quote really irked me as being rather arrogant:

    Naturally, students looked to professors for moral and worldly understanding.

    Wow. I don’t think he has met most of my students! By the time they get to me they have a pretty good understanding of how the world works. I can help guide them through quantum mechanics and the philosophical questions it raises, but the internet and their friend groups are much stronger influences than I am for everything else!

    His book title also makes me think he has missed everything that the new digital age has to offer. It feels like the constant discussion I have when organizing a conference: should we turn off wifi in the conference room and force everyone to listen to the talks, or leave it on? I see benefits and detriments to both – but you can’t hold back progress. Especially as the younger generations grow up and start attending conferences, this will not be an option. And they and forward-thinking conference organizers will find ways to use it to the attendees’ benefit – the same way I hope it will happen in classrooms. I should say, as a caveat, that I don’t know of anyone who has cracked that nut yet!

    In short:

    • He is right that large classes can undermine the interaction between students and professors. But the blame lies not just with the professors, as his article implies.
    • There is a lot of interaction going on nonetheless, taking advantage of electronic communication, not just in-person meetings.
    • Undergraduates learn at a university from many sources (e.g. the internet, social groups/media, etc.) in a way they didn’t a generation ago. This is good, not bad.
    • The kids are better than he seems to be giving them credit for.

    Edit: I originally saw this post in my Facebook feed, and my friend Salvatore Rappoccio had a fantastic response. It was private at the time, but now he has made his reply to the article public:

    What? I can’t hear you over the four undergrad students I’m sending to Fermilab for the summer or the two undergrads per semester I’ve mentored for three years. If you want to chat you’ll have to take a number behind the 20-ish students per semester I sit down with for philosophical discussions or career advice outside of my office hours. I have, in the last semester, discussed physics, career choices, fatherhood, kerbal space program, and drywalling with a 3-tour vet, a guy working full time as a contractor to put himself through school, an electrician going back to school for engineering, and a student practically in tears that I bothered to tell her that she improved a lot over the semester, just to name the most memorable ones.

    So What’s the point of a professor, you ask?

    To educate, obviously. And not just in the classroom. Maybe it’s just you who falls into the “useless” category.

    Pi Day–We should do it more! March 15, 2015

    Posted by gordonwatts in ATLAS, Outreach, physics life.


    Today was Pi Day. To join in the festivities, I took my kid to the Pi Day exhibit at MuCEM, the fancy new museum built in 2013 here in Marseille. It was packed. The room was on the top floor, full of people (sorry for the poor quality of the photo; my cell phone doesn’t handle the sun pouring in the windows well!) and full of tables with various activities all having to do with mathematics: puzzles and games that ranged from logic to group theory. It was very well done, and the students running it were enthusiastic and very helpful. They really seemed to want nothing more than to be there on a Saturday with this huge crowd of people. For the 45 minutes we were exploring, everyone seemed to be having a good time.

    And when I say packed, I really do mean packed. When we left, the fire marshals had arrived and were carefully counting people. The folks at the door (all students from nearby universities) were making sure that only one person went in for each person who came out.

    Each time I go to or participate in one of these things I’m reminded how much the public likes it. The Particle Fever movie is an obvious recent example, and a big one. It was shown here in Marseille in a theater for the first time about 6 months ago. The theater sold out! This was not uncommon back in the USA (though sometimes there were smaller audiences as well!). The staging was genius: the creator of the movie is a fellow physicist, and each time a town would do a showing, he would get in contact with some of his friends to do Q&A after the movie.

    Another big one I helped put together was the Higgs announcement on July 3, 2012, in Seattle. There were some 6 of us. It started at midnight and went on till 2 am (closing time). At midnight, on a Tuesday night, there were close to 200 people there! We’d basically packed the bar. The bar had to kick us out; people were still peppering us with questions as we tried to leave at closing. It was a lot of fun for us, and it looked like a lot of fun for everyone else who attended.

    I remember the planning stages for that clearly. We had contingency plans in case no one showed up, or ideas for how to alter our presentation if there were only 5 people. I think we were hoping for about 40 or so. And almost 200 showed up. I think most of us did not think the public was interested. This attitude is pretty common – “why would they care about the work we do?” is a common theme in conversations about outreach. And it is demonstrably wrong.

    The lesson for people in these fields: people want to know about this stuff! And we should figure out how to do these public outreach events more often. Some cost a lot and are years in the making (e.g. the movie Particle Fever), but others are easy – for example, Science Cafés around the USA.

    And in more and different ways. For example, some friends of mine have come up with a neat way of looking for cosmic rays using your cell phone (the most interesting conversation on this project can be found on Twitter). What a great way to get everyone involved!

    And there are selfish reasons for us to do these things! A lot of funding for science comes from government agencies in the USA and around the world (be they local or federal), and the more the public knows about what is being done with their tax dollars, and what interesting results are being produced, the better. Sure, there are people who will never be convinced, but there are also a lot who will become even more enthusiastic.

    So… what are your next plans for an outreach project?

    Big Time University Presidents February 5, 2015

    Posted by gordonwatts in university.

    I was going to write about something else this month, but this really got to me. My university, the University of Washington, just lost its president, Michael Young, to Texas A&M. I have issues at many levels with this. First, on the trivial side, I went to the University of Texas at Austin. Big football rivalry… they would catch our mascot and brand him, we would… well. You get the idea.

    First let me say that I like the guy. And I think what he has been doing at the UW is generally the right thing. He seems effective, and the state legislature seems to like him (amazingly important for a state school). I have no real problems with him as president at UW. In fact, I wish he had continued. He has been quoted in papers as saying that money is the reason he is leaving – but it is the size of the funding for Texas A&M, not his salary. The Texas schools are remarkably well funded for public institutions, btw. This is probably true. But still…

    But there are several things that really get to me about this move. First, he just arrived. He was settled on as president of UW in April of 2011, and started the following summer/fall. The University of Washington is like a giant container ship: if you are going to turn it, it will take many years to do it! I don’t think he has really been at UW long enough to effect any real change. Course adjustments, and some groundwork for new directions, perhaps. But that is all. We’ve had a revolving door of presidents recently – no one staying for very long. Public education is going through a very bumpy time right now – it seems that in general the public is divesting itself, and navigating the politics of establishing a new relationship with the state requires a sustained relationship with the state legislature. Beyond that, there is just the idea that UW needs to settle down and concentrate on doing things, rather than being constantly distracted by searches like this. I believe we are the largest recipient of federal grants for research of any public university in the United States. We should be concentrating on teaching and research (and our hospital), not distracted by another year-long search for a new president. As a result I find this sudden departure frustrating on an intellectual level.

    But I find this frustrating on a more emotional level as well. It took me a day or two to figure this out. These presidents show up, are basically rock stars, and then walk away, discarding the old institution as if it were just a stepping stone toward their career goals. What am I? Chopped liver? The problem is that I put a lot of myself into this job. You insult my job, you insult me. Ok, perhaps not the most healthy attitude, but, frankly, I can’t imagine doing a job that I wasn’t passionate about. And Young being here for such a short time and then moving on (to what sounds like a much higher paid job) is difficult for me to swallow. Is money really all that drives presidents of large universities these days? I want a president who is as invested in my university as I am. And so this really “irks” me.

    Again, I have to ask. Texas A&M? Really?

    Your office door open or closed? November 5, 2014

    Posted by gordonwatts in CERN, Office Space.

    You can probably tell a lot about an organization by the doors on the offices. Walking down its hallways, do the occupied offices have their doors open or closed? I have no idea what it means about the workplace!

    The reason I’m writing about this is that I noticed how many people keep their doors closed here at CERN! Part of the UW group just moved to a new building, B26, and the hallway is mostly empty as people have yet to move in. But then I realized that about half of the closed doors have people in the offices, working.

    This bugs me.

    When I’m at UW I always have my office door open. The only times it gets closed are when I’m not there, when I need a 5-minute nap, or, once in a great while, when I need to be isolated in order to get some tricky and time-sensitive work done.

    Frankly, when I close my office door, I feel like I have shut out everyone around me. I’ve turned my office into my home office. No one from work ever comes by (well, ok, except for my wife). I am never bothered. I never interact. Interaction is one of the key points of work: bouncing ideas off each other, a quick discussion with a student about the next steps for a paper, walking by an office and remembering you need to discuss something with its occupant. And in-person discussion is much more efficient than email for any subject with some complexity.

    Besides teaching, this is the reason I go to the University to work, rather than just stay home all the time.

    And here at CERN – this is particle physics. Perhaps one of the most social sub-fields of physics. We have to work in groups – we have to interact.

    I get it – there are individuals who would get no peace if their door was not closed. But, frankly, there are not many people like that.

    Food And Physics May 24, 2014

    Posted by gordonwatts in Energy, ITER.

    I’ve been lucky enough to combine two of my favorite things recently: food and talking about physics. And by talking I mean getting outside my comfort zone and talking to non-physicists about what I do. After all, I love what I do. And I’m a bit of a ham…

    I’ve done two Science Cafés. If you don’t know what they are, I definitely suggest you look up a local schedule. They are fantastic, and done all across the USA. There I’ve talked about particle physics, and the Higgs.

    But last night I went way out of my comfort zone and joined two other UW physicists to talk about ITER. Anna Goussiou, who does the same sort of physics I do, and Jerry Seidler and I all traveled to Shanik.

    It all started when the owner of Shanik, Meeru, got very excited reading an article in the March 3rd New Yorker called Star in a Bottle (it is available online). It describes the history of ITER and its quest for cheap clean energy. This nicely dovetailed with two of Meeru’s (and many other people’s) interests: the environment and science. Meeru then went looking for a way to share her excitement with others – which is how Anna, Jerry, and I ended up in her bar with about 40 people talking about ITER. We got free food. If you live in Seattle, I definitely recommend visiting. Amazing food.

    As some context, check out the Livermore energy flow charts (from Lawrence Livermore National Lab). I’d not heard about these before; click on the link and take a look. They show all the sources of energy (solar to petroleum) and how they are used (transportation, residential, etc.). One very nice thing: the total units add up to almost 100, so you can almost directly read the numbers as percentages. And when it comes to bettering the environment, we need to replace quite a chunk of those energy sources. The hope is ITER can help with that.

    What is ITER? Jerry made what I thought was a great analogy. We have the nuclear bomb, which we have harnessed for peaceful purposes in the form of the nuclear reactor. Bomb to electricity. ITER, and other fusion-based research projects, are attempting to harness the H-bomb (hydrogen bomb) for peaceful purposes in the same way. Unlike a fission reactor, however, the radiation is going to be minimal. It will not have nearly the waste problem that a nuclear reactor has.

    Frankly, I didn’t know much about ITER when this whole thing started. But the concept is pretty simple. You start with one of the major seed reactions in a star: deuterium and tritium, both different forms (isotopes) of hydrogen. If you can get them close enough to “touch,” they will bind to form helium, an extra neutron, and a boat-load of energy. If you can capture that energy as heat and use it to boil water, then you can produce electricity by using the steam to run a turbine. The devil, however, is in the details. First, it takes a tremendous amount of force to get the deuterium and tritium close together. In the case of an H-bomb, a nuclear bomb is used to accomplish this! Obviously, you can’t blow up nuclear bombs in the middle of ITER! Second, when it starts to burn it is hot. Center-of-a-star hot! Pretty much nothing can contain that – everything will melt! The ITER project, under construction now, thinks it has solved the heating problem and the confinement problem. ITER, big science, is very much that: a science experiment. There are decades of research behind it, but there are some very real problems that remain to be solved, and they can’t solve all of them until they build the machine and try it out.
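
    As a back-of-the-envelope check on the “boat-load of energy” claim, here is the mass-defect arithmetic for a single D + T reaction, using standard atomic masses. This is just textbook bookkeeping, not anything specific to ITER:

```python
# Back-of-the-envelope check of the D + T -> He-4 + n energy release,
# using standard atomic masses (in unified atomic mass units).
U_TO_MEV = 931.494        # energy equivalent of 1 u, in MeV

m_deuterium = 2.014102    # u
m_tritium   = 3.016049    # u
m_helium4   = 4.002602    # u
m_neutron   = 1.008665    # u

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
q_value_mev = mass_defect * U_TO_MEV

print(f"Energy released per fusion: {q_value_mev:.1f} MeV")   # about 17.6 MeV
```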

    But let’s say the machine works. What would it take to displace some of the dirtier forms of energy? It comes down to price. The fuel for an ITER-like power plant is going to be cheap. Very cheap. But the upfront costs are going to be high. The reactor is a serious bit of tech. The current ITER project is probably going to cost of order $20 billion USD. If it works, the second one will be much cheaper. This is very much like a nuclear reactor: the fuel, uranium, is very cheap, but the plant itself is quite expensive. Guesses put the cost of the resulting electricity higher than current fossil fuels, but not much higher.

    The current ITER project is also fascinating to me for another reason: it is a giant collaboration of many countries. Just like CERN, and my experiment, ATLAS. Only, ITER looks like it might be a little more dysfunctional than ATLAS right now. On the bright side, CERN did put together the world’s largest experiment, and it worked. So it should be possible.

    Last thing I wanted to mention was the cost. This is a big international project. Many countries (including the USA) are involved. And because of that there are some big issues. Each country is trying to reduce its cost, and a local decision can affect other components in ITER, generating a ripple effect – and delays and cost overruns (of which there have been a lot). Could one country build ITER? Let’s look at the USA. We have successfully run a few really big projects in our past – the Manhattan Project and the Apollo program come to mind. These were each about 1% of GDP. The USA’s current GDP is about $16 trillion, and 1% of that is about $160 billion per year. ITER, at roughly $20 billion spread over the 4 or 5 years it would take to build, works out to $4–5 billion per year – a small fraction of an Apollo-scale effort. So if you considered clean energy efforts like this of similar importance to those other projects, the USA could totally do it. Another sense of the scale of the project: the financial bailout was $780 billion or so.
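
    A quick sanity check of those round numbers (all figures are as rough as they are in the text):

```python
# Quick check of the rough numbers above.
us_gdp = 16e12                      # ~$16 trillion per year
one_percent_of_gdp = 0.01 * us_gdp  # ~$160 billion per year (Apollo/Manhattan scale)

iter_total_cost = 20e9              # ~$20 billion total
iter_build_years = 5
iter_per_year = iter_total_cost / iter_build_years   # ~$4 billion per year

print(f"1% of GDP:       ${one_percent_of_gdp / 1e9:.0f} billion per year")
print(f"ITER spend rate: ${iter_per_year / 1e9:.0f} billion per year")
print(f"ITER is about {iter_per_year / one_percent_of_gdp:.1%} of a 1%-of-GDP effort")
```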

    I have only one thing to say. Write your congress person and urge them to support all sorts of science research: be it ITER, solar power, or anything else. But get involved!

    Reproducibility… September 26, 2013

    Posted by gordonwatts in Analysis, Data Preservation, Fermilab, reproducible.

    I stumbled across an article on reproducibility recently, “Science is in a reproducibility crisis: How do we resolve it?”, with the following quote, which really caught me off guard:

    Over the past few years, there has been a growing awareness that many experimentally established "facts" don’t seem to hold up to repeated investigation.

    They made a reference to a 2010 alarmist New Yorker article, The Truth Wears Off (there is a link to a PDF of this article on the website, but I don’t know if it is legal, so I won’t link directly here).

    Read that quote carefully: many. That means a lot. It would be all over! Searching on the internet, I stumbled on a Nature report. They looked carefully at a database of medical journal publications and retraction rates. Here is an image of the retraction rates they found as a function of time:

    [Figure from the Nature report: retraction rates as a function of time; note the axis scale factors discussed below.]

    First, watch out for the axes here – multiply the numbers on the left by 10 to the 5th (100,000), and the numbers on the right by 10 to the –2 (0.01). In short, the peak rate is 0.01%. This is a tiny number. And, as the report points out, there are two ways to interpret the results:

    This conclusion, of course, can have two interpretations, each with very different implications for the state of science. The first interpretation implies that increasing competition in science and the pressure to publish is pushing scientists to produce flawed manuscripts at a higher rate, which means that scientific integrity is indeed in decline. The second interpretation is more positive: it suggests that flawed manuscripts are identified more successfully, which means that the self-correction of science is improving.

    The truth is probably a mixture of the two. But this rate is still very very small!

    The reason I harp on this is that I’m currently involved in a project for which reproducibility is one of the possible uses: preserving the data of the DZERO experiment, one of the two general-purpose detectors at the now-defunct Tevatron accelerator. Through this I’ve come to appreciate exactly how difficult and potentially expensive this process might be. Especially in my field.

    Let’s take a very simple example. Say you use Excel to process data for a paper you are writing. The final number comes from this spreadsheet and is copied into the conclusions paragraph of your paper. So you can now upload your Excel spreadsheet to the journal along with the draft of the paper. The journal archives it forever. If someone is puzzled by your result, they can go to the journal and download the spreadsheet and see exactly what you did (as modern economics papers do). Win!

    Only wait. What if the numbers that you typed into your spreadsheet came from some calculations you ran? Ok. You need to include those. And the inputs to the calculations. And so on and so on. For a medical study you would presumably have to upload the anonymized medical records of each patient, and then everything from there to the conclusion about a drug’s safety or efficacy. Uploading raw data from my field is not feasible – it is petabytes in size. And this is all ad-hoc – the tools we use do not track the data as it flows through them.

    As a young professor I was involved in a study that was trying to replicate and extend a result from a prior experiment. We couldn’t. The group from the other experiment was forced to resurrect code on a dead operating system and figure out what they had done – reproduce it – so they could answer our questions. The process took almost a year. In the end we found one error in that original paper – but the biggest change was just that the modern tools had a better model of the physics, and that was the main reason we could not replicate their results. It delayed the publication of our paper by a long time.

    So, clearly, it is useful to have reproducibility. Errors are made. Bias creeps in even with the best of intentions. Sometimes fraud is involved. But these negatives have to be balanced against the cost of making all the analyses reproducible. Our tools just aren’t there yet, and it will be both expensive and time-consuming to upgrade them. Do we do that? Or measure a new number, rule out a new effect, test a new drug?

    Given the rates above, I’d be inclined to select the latter. And have a process of evolution of the tools. No crisis.

    Running a Workshop July 13, 2013

    Posted by gordonwatts in Conference, UW.


    I ran the second workshop of my career two weeks ago. There were two big differences and a small one between this one and the first one I ran. The first was an OSG workshop with no parallel sessions – this one had 6 running at one point. Back then I had administrative help as part of our group – that luxury is long gone! And there were about 20 or 30 more people attending this time.

    In general, I had a great time. I hope most people who came to Seattle did as well. The weather couldn’t have been better – sun and heat. Almost too hot, actually. The sessions I managed to see looked great. Ironically, one reason I went after this workshop was to be able to attend the sessions and really see how this Snowmass process was coming along. Anyone who has organized one of these things could tell you how foolish I was: I barely managed to attend the plenary sessions. Anytime I stepped into a parallel room, someone would come up to me with a question that required me to run off and fetch something, or lead them to some room, or…

    There were a few interesting things about this conference that I thought would be good to write down – and perhaps others will find them useful. I’d say I would find these notes useful, but I will never do this again. At least not until I’ve forgotten how much work it was (~5 years???).

    First, people. Get yourself a few dedicated students who will be there from 8 am to 8 pm every single day. I had two – it would have been better with three. But they ran everything. This conference wouldn’t have worked without them (thanks Michelle Brochmann and Jordan Raisher!!!). It is amazing how much two people can do – run a registration desk, set up and wire a room for power, manage video each day, stand in a hallway and be helpful, track down coffee that has been delivered to a different building (3 times!)… I suppose no one job is all that large, but these are the sorts of things that, if they are missing, can really change the mood of a conference. People will forgive a lot of mistakes if they think you are making a good effort to make things right. Something I’m not totally sure I should admit.

    The other thing I discovered for a workshop this size was just how helpful my local department was willing to be! Folder stuffing? Done for free by people in the front office. Printing up the agendas? No problem! Double checking room reservations? Yes! Balancing the budget and making sure everything comes out ok? You bet! They were like my third hand. I’m sure I could have hired help – but given the total hours spent, especially by some high-end staff, I’m sure it would have cost quite a bit.

    The budget was crazy. It has to be low to get people here – so nothing fancy. On the other hand, it has to be large enough to make everyone happy. What really tripped me up was that I set the economic model about 3 or 4 weeks before the start of the conference. I had a certain amount of fixed costs, so after subtracting those and the university’s cut, I knew what to do for coffee breaks: how much I could have and how often, etc. And then in the last two weeks a large number of people registered! I mean something like 40%. I was not expecting that. That meant the last week I was frantically calling to increase order sizes for coffee breaks, seeing if larger rooms were available, etc. As it was, some of the rooms didn’t have enough space. It was a close thing. Had another 20 shown up, my coffee breaks would have had to be moved – as it was, it really only worked because the sun was out the whole conference, so people could spill outside while drinking their coffee! So, next time, leave a little more room in the model for such a late bump. For the rest of you who plan to go but wait till the last minute to register? Don’t!

    Sound. Wow. When I started this I never thought it was going to be an issue! I had a nice lecture hall that seats 300 people; I had about 130 people in the end. The hall’s sound system was great: large overhead speakers and a wireless microphone. I had a hand-held wireless mike put in the room to capture questions. And there was a tap in the sound system labeled audio out. There were two things I’d not counted on, however. First, that audio-out was actually left over from a previous installation and no longer worked. Worse, by the time I discovered it, the university couldn’t fix it. The second thing was the number of people who attended remotely. We had close to 100 people sign up to attend remotely. And they had my Skype address. I tried all sorts of things to pipe the sound in. One weird thing: one group of people would say “great!” and another would say “unacceptable!”, and I’d adjust something and their reactions would flip. In the end the only viable solution was to have a dedicated video microphone and force the speakers to stand right behind the podium and face a certain way. It was the only way to make it audible at CERN. What a bummer!

    But this led me to think about the situation a bit. Travel budgets in the USA have been cut a lot. Many fewer people are traveling right now; when we asked, it was the most common reason given for not attending. But these folks who don’t attend do want to attend via video. In order to have done this correctly I could have thrown about $1000 at the problem. But, of course, I would have had to charge the people who were local – I do not think it is reasonable to charge the people who are attending remotely. As it was, the remote people had a rather dramatic effect on the local conference. If you throw a conference with any two-way remote participation, then you will have to budget for this. You will need at least two good wireless hand-held microphones. You will need to make sure there is a tap into your room’s sound system. Potentially you’ll need a mixer board. And most important, you will have to set it up so that you do not have echo or feedback on the video line. This weirdness – that local people pay to enable remote people – is standard, I suppose, but it is now starting to cost real money.

    For this conference I purchased a USB presenter from Logitech. I did it for its 100’ range. I was going to have the conference pay for it, but I liked it so much I’m going to keep it instead. This is a Cadillac, and it is the best-working one I’ve ever used. I do not feel guilty using it. And the laser pointer? Bright (green)! And you can set it up so it vibrates when time runs out.

    Another thing I should have had is a chat room for the people organizing and working with me. Something that everyone can have on their phone cheaply – for example, WhatsApp. Create a room. Then when you are at the supermarket buying flats of water and you get a call from a room that is missing a key bit of equipment, you can send a message “Anyone around?” rather than going through your phone book one number after the other.

    And then there are some things that can’t be fixed due to external forces. For example, there are lots of web sites out there that will manage registration and collect money for you for a fee of $3–$4 a registration. Why can’t I use them? Some of the equipment wasn’t conference grade (the wireless microphones cut out at the back of the room). And, wow, restaurants around the UW campus during summer can be packed with people!