Running a Workshop July 13, 2013Posted by gordonwatts in Conference, UW.
I ran the second workshop of my career two weeks ago. There were two big differences and a small one between it and the first one I ran. First, the earlier one was an OSG workshop with no parallel sessions – this one had six running at one point. Second, I had administrative help as part of our group back then – that luxury is long gone! And the small difference: there were about 20 or 30 more people attending this time.
In general, I had a great time. I hope most people who came to Seattle did as well. The weather couldn’t have been better – sun and heat. Almost too hot, actually. The sessions I managed to see looked great. Ironically, one reason I took on this workshop was to be able to attend the sessions and really see how this Snowmass process was coming along. Anyone who has organized one of these things could have told you how foolish that was: I barely managed to attend the plenary sessions. Anytime I stepped into a parallel room someone would come up to me with a question that required my running off to fetch something, or leading them to some room, or…
There were a few interesting things about this conference that I thought would be good for me to write down – and perhaps others will find this useful. I’d say I would find these notes useful, but I will never do this again. At least as long as it takes me to forget how much work it was (~5 years???).
First, people. Get yourself a few dedicated students who will be there from 8 am to 8 pm every single day. I had two – it would have been better with three. But they ran everything. This conference wouldn’t have worked without them (thanks Michelle Brochmann and Jordan Raisher!!!). It is amazing how much two people can do – run a registration desk, set up and wire a room for power, manage video each day, stand in a hallway and be helpful, track down coffee that has been delivered to a different building (3 times!)… I suppose no one job is all that large, but these are the sorts of things that, if they are missing, can really change the mood of a conference. People will forgive a lot of mistakes if they think you are making a good effort to make things right. Something I’m not totally sure I should admit.
The other thing I discovered for a workshop this size was how helpful my local department was willing to be! Folder stuffing? Done for free by people in the front office. Printing up the agendas? No problem! Double-checking room reservations? Yes! Balancing the budget and making sure everything comes out ok? You bet! They were like my third hand. I’m sure I could have hired help – but given the total hours spent, especially by some high-end staff, I’m sure it would have cost quite a bit.
The budget was crazy. It has to be low to get people here – so nothing fancy. On the other hand, it has to be large enough to make everyone happy. What really tripped me up was that I set the economic model about 3 or 4 weeks before the start of the conference. I had a certain amount of fixed costs, so after subtracting those and the university’s cut, I knew what I could do for the coffee breaks – how much I could have and how often, etc. And then in the last two weeks a large number of people registered! I mean something like 40%. I was not expecting that. So in the last week I was frantically calling to increase order sizes for coffee breaks, seeing if larger rooms were available, etc. As it was, some of the rooms didn’t have enough space. It was a close thing. Had another 20 shown up, my coffee breaks would have had to be moved – as it was, it really only worked because the sun was out the whole conference, so people could spill outside while drinking their coffee! So, next time, leave a little more room in the model for such a late bump. And for the rest of you who plan to attend a workshop but wait till the last minute to register? Don’t!
Sound. Wow. When I started this I never thought sound was going to be an issue! I had a nice lecture hall that could seat 300 people; I had about 130 people in the end. The hall’s sound system was great: large overhead speakers and a wireless microphone. I had a hand-held wireless mic put in the room to capture questions. And there was a tap in the sound system labeled audio out. There were two things I’d not counted on, however. First, that audio-out was actually left over from a previous installation and no longer worked. Worse, by the time I discovered it the university couldn’t fix it. The second thing was the number of people that attended remotely. We had close to 100 people sign up to attend remotely. And they had my Skype address. I tried all sorts of things to pipe the sound in. One weird thing: one group of people would say “great!” and another would say “unacceptable!” – and I’d adjust something and their reactions would flip. In the end the only viable solution was to have a dedicated video microphone and force the speakers to stand right behind the podium and face a certain way. It was the only way to make them audible at CERN. What a bummer!
But this led me to thinking about the situation a bit. Travel budgets in the USA have been cut a lot. Many fewer people are traveling right now; when we asked, it was the most common reason given for not attending. But the folks that don’t attend do want to attend via video. To have done this correctly I would have had to throw about $1000 at the problem. But, of course, I would have had to charge the people who were local – I do not think it is reasonable to charge the people who are attending remotely. As it was, the remote people had a rather dramatic effect on the local conference. If you throw a conference with any two-way remote participation, then you will have to budget for this. You will need at least two good wireless hand-held microphones. You will need to make sure there is a tap into your room’s sound system. Potentially you’ll need a mixer board. And most important, you will have to set it up so that you do not have echo or feedback on the video line. This weirdness – that local people pay to enable remote people – is standard I suppose, but it is now starting to cost real money.
For this conference I purchased a USB presenter from Logitech. I did it for its 100’ range. I was going to have the conference pay for it, but I liked it so much I’m going to keep it instead. This is a Cadillac, and it is the best-working one I’ve ever used. I do not feel guilty using it. And the laser pointer? Bright (green)! And you can set it up so it vibrates when time runs out.
Another thing I should have had is a chat room for the people organizing and working with me – something everyone can have on their phone cheaply. For example, WhatsApp: create a room, and then when you are at the supermarket buying flats of water and you get a call from a room that is missing a key bit of equipment, you can send a message “Anyone around?” rather than going through your phone book one name after the other.
And then there are some things that can’t be fixed due to external forces. For example, there are lots of web sites out there that will manage registration and collect money for you for a fee of $3–$4 a registration. Why can’t I use them? Some of the equipment wasn’t conference grade (the wireless microphones cut out at the back of the room). And, wow, restaurants around the UW campus during summer can be packed with people!
Getting WiFi in a conference of online addicts is hard January 1, 2011Posted by gordonwatts in Conference, physics life.
This post was triggered by an article I saw pointing out some fundamental limitations of WiFi at tech conferences.
Last month in San Francisco at the Web 2.0 Summit, where about 1,000 people heard such luminaries as Mark Zuckerberg of Facebook, Julius Genachowski, chairman of the Federal Communications Commission, and Eric E. Schmidt of Google talk about the digital future, the Wi-Fi slowed or stalled at times.
I like the way one of my students, Andy Haas, put it once. He was giving a talk at a DZERO workshop on the Level 3 computer farm and trying to make a point about the number and type of computers that were in the farm. He drew an analogy to the number of laptops that were open in the room. It can be a little spooky – almost everyone has one, and almost everyone has it open during conference talks. In Andy’s case there were about 100 people in the room. And when you are giving the talk you have to wonder: how many people are listening!?
There is another side-effect, however. It is rare that the hotel, or wherever, is ready for the large number of devices that we particle physicists bring to a meeting. In the old days it was a laptop per person; now add in a cell phone that also wants an internet connection. Apparently most conference organizers used to guess that about 1 in 5 people would have a portable that needed a connection at any one time. Folks from particle physics, however, just blew that curve! The result was often lost wifi connections, many seconds to load a page, and an inability to download the conference agenda! As conference organizers we learned long ago that this is one of the most important things to get right – and one of the key things that will be used to judge the organization of your conference.
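You can see how badly the old rule of thumb fails with a back-of-the-envelope estimate. The device counts and duty cycles below are my guesses, not measurements:

```python
def concurrent_devices(attendees, devices_per_person, duty_cycle):
    """Estimate simultaneously active wifi clients.

    devices_per_person and duty_cycle are assumptions: the old conference
    rule of thumb was ~1 portable in 5 active at any moment; a physics
    crowd runs closer to laptop + phone each, most of them on during talks.
    """
    return attendees * devices_per_person * duty_cycle

old_rule = concurrent_devices(130, 1.0, 0.2)   # classic 1-in-5 planning
hep_crowd = concurrent_devices(130, 2.0, 0.8)  # laptop + phone, mostly open
```

For my 130-person workshop that is roughly 26 active clients under the old rule versus around 200 for a physics crowd – nearly an order of magnitude, which is exactly the curve-blowing the article describes.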
The article is interesting in another aspect as well (other than pointing out a problem we’ve been dealing with for more than 10 years now). WiFi is not really designed for this sort of use. Which leads to the question – what is next?
Presentation September 20, 2009Posted by gordonwatts in computers, Conference.
Ok. Really. This is my last post of Video for a while. Ever since I started the DeepTalk project I’ve started to be much more aware of how conference data is put out on the web. So it has become a bit of a soap-box for me.🙂 But this is the last one for a while, I promise.
- Pycon 2009 – the annual Python conference. At first I was hopeful about this – the web page is quite nice and you’ll notice right at the top there is a nice iCal link so you can download the schedule. However, the schedule is just that – a schedule. You can’t get access to the links to the talks or video from there. Associated with the web page is an RSS feed too – which is excellent – I could use my podcast software (any such software should be able to read it) and download the audio of the whole event. Sweet. However, there is no way to connect the slides and the video or audio together via a program (as far as I can tell). The video looks like it is all archived on blip.tv. The beauty of this system is that it makes files available in lots of formats (see this talk, click on the “files and links” to see). AND there is a small RSS link at the bottom – so I can get all the talks down as video to my podcast software (the default seems to be the MP4 format, which satisfies most of my requirements for a good video format). So this conference has made its schedule available in a standard format (iCal) and all of its videos available in a standard format (blip.tv). I’d like to see some integration between the two so that one could find the slides, abstract, and video together, using a program.🙂
- Strings ‘07 Conference – a conference on strings. The conference website is basically a series of static web pages – including the schedule (I’ve extracted that page – but you can get to it by looking at the home page –> Scientific Program –> Speakers&Titles). There are links to the slides and video. The video is in MP4 format (fantastic!). None of this is discoverable by a program, unfortunately – you would have to scrape the web page in order to find it. Chimpanzee, who has left a lot of comments on these video postings, has done some work with this conference, putting it in iTunes as a show. Unfortunately, unless you have iTunes installed, this is not very useful, as it brings you to an Apple page that asks you to download and install iTunes. However, Chimpanzee did put this on blip.tv as several shows (one show per day – though from the point of view of subscribing I think I’d have preferred a single show for the whole conference). Also, the nice RSS feeds to blip.tv are well hidden. So, well done with the mp4 and PDF files up there. The blip.tv solution is quite nice, again. The static web page that links them together isn’t so good – it isn’t very discoverable, unfortunately.
- Lepton-Photon 2009 – The agenda is posted in the standard agenda software in use in HEP, Indico, which makes it easily exportable. Each talk has a link to the PDF as well as a Video link. Unfortunately, the Video leads to a RealMedia file – which my open source tools cannot play. So the video format doesn’t pass muster.
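The program-level integration I keep wishing for in the PyCon case wouldn’t take much: fetch the iCal schedule and the video RSS feed, then match entries by talk title. Here is a rough sketch with made-up, inlined feed contents and a deliberately naive iCal parser and title match – real feeds would need a proper iCal library and fuzzier matching:

```python
import xml.etree.ElementTree as ET

def ical_titles(ical_text):
    """Pull SUMMARY lines out of an iCal schedule (very naive parser)."""
    return [line.split(":", 1)[1].strip()
            for line in ical_text.splitlines()
            if line.startswith("SUMMARY:")]

def rss_items(rss_text):
    """Map item titles to their video links in an RSS feed."""
    root = ET.fromstring(rss_text)
    out = {}
    for item in root.iter("item"):
        title = item.findtext("title", "").strip()
        out[title.lower()] = item.findtext("link", "").strip()
    return out

def link_schedule_to_video(ical_text, rss_text):
    """Return (talk title, video url or None) pairs."""
    videos = rss_items(rss_text)
    return [(t, videos.get(t.lower())) for t in ical_titles(ical_text)]

# Made-up sample feeds standing in for the conference's real ones:
ICAL = """BEGIN:VEVENT
SUMMARY:Keynote: The State of Python
END:VEVENT"""
RSS = """<rss><channel><item>
<title>Keynote: The State of Python</title>
<link>http://example.org/keynote.mp4</link>
</item></channel></rss>"""

pairs = link_schedule_to_video(ICAL, RSS)
```

Twenty-odd lines to join the two standard formats a conference already publishes – which is why the missing integration is so frustrating.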
I am pleasantly surprised by blip.tv. It looks like a very nice service. I have no idea what their business model is. The good news is that people won’t watch most talks from a physics conference very much – so they will require very little bandwidth.
No conference gets it quite right (IMHO), but they all come close. From my point-of-view, combining Indico with blip.tv seems like a fairly ideal solution given current technology constraints.
Two quick notes. First, there had been hope that HTML5 would standardize on a single video format – and we could all just depend on all browsers playing it without having to install plugins like the security-ridden Flash or RealMedia. This is not to be, however. I stumbled on an excellent blog series for those of you who want to know what is happening with HTML5. This posting makes it clear that a preferred video format no longer exists in the standard (for details, see the change log for the standard).
Second, I keep holding up Indico as a nice way to post meeting agendas. But perhaps there is already a standard for this sort of thing? A microformat, or perhaps something from the Semantic Web? Then Indico (and everyone else) could produce that for various tools to parse. I did only a brief search, but didn’t find anything.
Time shifting Video: Recording September 6, 2009Posted by gordonwatts in computers, Conference.
The question I have is about the on-site effort and expense. Take the PyCon setup: any clue what sync software they used? Because of the zooming, they had a person with a camera. Maybe I’ve not noticed this before, but having the slides small and the person large is an interesting idea. With the slides separately available in full resolution, one could use the on-screen slide images as just a cue to tell you when to click on the full-size ones. Usually it’s the other way around, with the person being very small and the slides larger. In fact, pedagogically, having the viewer manipulate something during the talk would keep them in the game, so to speak.
Ok, there are several questions. First point: I want to be able to view this stuff on my MP3 player – so “keeping someone in the game” is not what I have in mind for that sort of viewing.🙂
Now, the more important thing: cost of recording. There was a reply to this from Tim:
Why don’t you just record the video from the camera and the input to the projector? This would seem like an easy way to get synchronized slides.
For some dumb reason that hadn’t occurred to me – get a VGA splitter and hook one of its outputs up to your computer. The Lepton-Photon folks seem to have basically done that:
Judging from the quality of the slides (which is worse here because this was a low-resolution image), I’d guess they had a dedicated camera recording the slides rather than capturing the computer output directly. A second stream focused on the presenter, and they can use common post-processing tools to combine the two streams as they have above. In fact, the above is from a real-time stream. I don’t know what tool they used, but I can think of a few open-source ones that wouldn’t have too much difficulty, as long as you had a decent CPU behind you. One caveat here: in a production environment I have no idea how hard it is to capture two streams and keep them in sync. If they are on two computers then you need software to make sure they start at the same time. Or if there is a glitch and you lose one, etc.
Chip also asks the key question:
what did it cost?
I’m not sure what the biggest expense for these things is – but the usual culprit is the person doing the work – so I’ll go with that. To record a conference I assume you need to set up the video, run the system while it is recording, and then post-process the video to make it available on the web. The post-processing could be fairly time consuming: you have to find where each talk ends and the next one begins, cut the talks, render the final video, etc.
Thinking about this, it seems like one could invest a little money up front and perhaps drop the price quite a bit. First, software to record the two streams while keeping track of the sync can’t be too hard to write. On the Windows platform I’ve seen plenty of samples using video and doing real-time interpretation. Basically, at the end of the day you would want two files with synchronization information: one with video focused on the slides, and the other on the person (with a decent audio pickup!).
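The sync bookkeeping could be as simple as each recorder logging (frame number, wall-clock time) pairs while it captures, then aligning the two logs afterwards. A toy sketch of just the alignment step – the logs here are faked, since the capture side would be the real engineering work:

```python
def align_streams(slide_log, camera_log):
    """For each slide frame, find the camera frame closest in time.

    Each log is a list of (frame_index, timestamp_seconds) pairs, as the
    two recorders would write them. Returns (slide, camera) frame pairs.
    """
    pairs = []
    for s_idx, s_t in slide_log:
        # Nearest-timestamp match; a real tool might interpolate instead.
        c_idx, _ = min(camera_log, key=lambda c: abs(c[1] - s_t))
        pairs.append((s_idx, c_idx))
    return pairs

# Faked logs: slide stream at 1 fps, camera at 2 fps, camera started 0.2 s late
slides = [(i, float(i)) for i in range(3)]        # t = 0, 1, 2
camera = [(i, 0.2 + i * 0.5) for i in range(6)]   # t = 0.2, 0.7, 1.2, ...
matches = align_streams(slides, camera)
```

As long as both machines log against a common clock (or you measure their offset once), a glitch in one stream only costs you the frames that were dropped, not the whole alignment.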
If one wants to stream the conference live – that is harder. I don’t know enough about streaming technology to know how it would fit in above without impacting the timing – which is fairly important for the next step.
A human could probably recognize almost the complete structure of the conference from the slide stream alone. I suspect we could write a computer program to do something similar – especially if we also handed the program the PDFs of all the talks. Image comparison is probably decent enough that it could match almost every slide to the slide stream. As a result you’d get a complete set of timings for the conference – when the title slide went up, when the last slide was reached, when the next talk started. Heck, even when every single slide was transitioned. You could then use these timings to automatically split the video into talk-by-talk video files. Or generate a timing file with detailed information (I’d love slide-by-slide timing for my deeptalk project). During this step you could also combine the two streams, much as is done in the live stream I recorded above. You could even discard the slide stream and put high-quality images from the PDF in its place.
I doubt this would be perfect, but I bet it would get you 90% of the way there. It would have trouble at the edges – before the conference started, for example. Or if someone gives a talk with no slides, or with slides very different from the ones the program is given to parse. But, heck, that is to be fixed in Version 2.0. I do not know if 90% is good enough for a project like this.
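A crude version of that matching step is within reach of a short script: reduce each video frame and each PDF page to a tiny brightness fingerprint, and pick the page with the fewest differing bits. This is a toy with fake 2x2 grayscale “images” – a real version would render the PDF pages and use proper perceptual hashing on downscaled frames:

```python
def fingerprint(image):
    """Flatten a grayscale image (list of pixel rows) into a bit list:
    1 where a pixel is brighter than the image mean, else 0."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def match_frame(frame, slide_pages):
    """Return the index of the slide page whose fingerprint differs from
    the video frame's in the fewest bits (Hamming distance)."""
    f = fingerprint(frame)
    def distance(page):
        return sum(a != b for a, b in zip(f, fingerprint(page)))
    return min(range(len(slide_pages)), key=lambda i: distance(slide_pages[i]))

# Fake slides: page 0 is bright on top, page 1 bright on the bottom
page0 = [[9, 9], [0, 0]]
page1 = [[0, 0], [9, 9]]
# A noisy "camera frame" of page 1
frame = [[1, 0], [8, 9]]
best = match_frame(frame, [page0, page1])
```

Run that against every frame of the slide stream and the frame indices where the best match changes are exactly the slide-transition timings described above.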
Seems like a perfect small inter-disciplinary project between CS and physics (with a small grant for one year of work).🙂 I wonder how far fetched this is?
Time shifting a Conference: Video Formats August 31, 2009Posted by gordonwatts in computers, Conference.
I got a few interesting comments when I wrote about discoverability of conference video the other day. I’ve been away for a week, so instead of replying there I thought I’d write a whole new post. But I want to change the topic a little bit – motivated by two things: Lepton/Photon and a comment by chimpanzee. And sorry if this gets a little technical… I’m on a rant here!
First Lepton/Photon. The conference is over now. Check out the agenda:
Click on it and… they require RealPlayer to be installed. Bummer. RealVideo is a proprietary video format; you have to have RealPlayer installed in order to use it. There are some open-source implementations of RealVideo out there, but I think they handle an older version of the format (for example, VLC claims to know what to do with RealVideo streams, but falls over before it plays anything). For most people that may not be a big deal – just install RealPlayer. I personally have a problem with the RV software. But for the purposes of this post my problem is that I can’t download the video stream and pack it up onto my mp3 player (I have a Zune). For $40, I might be able to do it for an iPod (not clear from their website).
If anyone knows a way to play those above files without having to use the RealPlayer software, I’d love to know!
Second, there was the comment by chimpanzee. I recommend reading the full thing – I’m going to cherry-pick for this post and stick with video formats:
For those who love “competition” when it comes to codecs – the competitive war just heated up. Google, just bought On2/VP8 and apparently is going to Open Source VP8.
I know nothing about VP8 other than what is on On2’s web site. It is currently a proprietary video codec. And chimpanzee says in his comment that it seems reasonable Google would open-source it. Any of you who have downloaded video files from the internet (Bad! Bad!) already know there is a plethora of video formats out there, and one often needs to install lots of different codecs to get them all to play. VP8 will not solve this, at least not in the near term (<5 years). But this got me thinking – we can solve this now, can’t we?
So, I have a modest proposal. Physics conferences should archive the video of their conferences in a format that plays natively on the n most used operating systems out-of-the-box (where n is > …):
- Linux – this is funny. In HEP we mostly use Scientific Linux. This is not optimized for watching video. So choose the most popular distro – Ubuntu I think? I used to have a distro of that running on my laptop but had to delete it for space reasons, so I couldn’t test it…
- Mac – A recent version of OSX running on Intel Mac’s.
- Windows – this is tricky. XP is the most popular version out there; however, the OS is quite old and plays almost nothing modern out of the box. Plug-ins are available (including my favorite – wow, I hate the new SourceForge) that will allow it to play almost anything. So that isn’t good. Vista, I think, is in the same boat – it doesn’t have much in the way of extra codecs. W7, however, supports most formats I’ve seen out there (I couldn’t find the docs on Microsoft’s site, but I did find this, which matches my experience with the release candidate). So I think we are almost forced to pick Windows 7 for the Windows branch.
- iPhone/mobile – this I’m not too worried about. Usually if the host system can play it (iTunes, Zune, etc.) then it can be transcoded and placed on the mobile device.
Given all this, it strikes me that MP4 is the only video format that comfortably fits. There are plenty of open source tools – heck, tools in general – that allow you to manipulate it to your heart’s content. Play it on your mobile player, your TV; heck, most modern burner software can burn it to a DVD if you want. Further, if you are like me, and want to manipulate the video for whatever reason, well, you can, because there are so many tools.
So, that is my modest proposal for archiving.
Streaming is more complex. I don’t know as much about streaming. I’d be inclined to vote for MP4, but I’m not sure how well it works in a streaming protocol.
I have to sneak one vacation picture in…🙂 The French countryside is pretty amazing… This is near the salt flats outside of Rochelle.
Timeshifting A Conference: Can we all agree? Please? August 21, 2009Posted by gordonwatts in computers, Conference, DeepTalk, Video.
A video feed or recording of a big physics conference is a mixed blessing.
If there is a video recording of a huge conference – like DPF – it would be hundreds of hours long. Many of the parallel sessions describe work that is constantly being updated – so it isn’t clear, if you posted the video, how long it would stay relevant. I’ve seen conferences post video of only the plenary sessions and skip the parallel sessions for, I imagine, this very reason.
I definitely appreciate it when one of the big conferences does furnish video or streaming. But I have a major problem: time shifting. Even if I’m awake during the conference it is rare I can devote real time to watching it. Or if there is a special talk I might have to try to arrange my schedule around the special talk. But, come on folks – we’ve solved this problem, right? Tivo!?!? Or for us old folks, it is called a VCR!!!
Which brings me to the second issue with conference video: formats. For whatever reason the particle physics world has mostly stuck to using RealMedia of one form or another. Ugh. I was badly burned back in the day by the extra crap that RealMedia installed on my machine, so I’m gun-shy now. But the format is also hard to manipulate. I tried a recent version of their player (maybe about 6 months ago) and it has a nice recording feature – exactly what I need here. But I couldn’t figure out how to convert its stored format to mp4 or anything else I could download to my mp3 player! There are some open-source implementations out there – but I’ve never encountered one that is good enough to reliably parse these streams.
This year’s Lepton-Photon is trying something new. They are streaming in RealMedia, but they also have an mp4 stream. And the free VLC player can play it. What is better, the free VLC player can record it! And convert it! Hooray!!! I can now download and convert these guys and listen/watch them on my commute to work and back, which is perfect for me (the picture above is a screen capture of the stream in VLC). The picture isn’t totally rosy, however. VLC seems to lose the stream every now and then. So when I’m recording I have to watch the player like a hawk and restart it. Sometimes it will go two hours between drops, and other times just 10 minutes. It would be nice if it would auto-restart.
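The auto-restart I’m wishing for is really just a few lines of scripting: run the recorder, and when it exits because the stream dropped, start it again. A sketch of that watchdog loop – the actual VLC command line is left as a placeholder, since the right options depend on the stream:

```python
import subprocess
import time

def record_with_restart(cmd, max_restarts=100, cooldown=2.0):
    """Keep re-launching a recorder command until it exits cleanly
    or we give up.

    cmd is the recorder invocation as an argument list – in practice a
    vlc command that records the stream to disk (placeholder here; the
    real options depend on the stream). Returns the number of launches.
    """
    launches = 0
    while launches < max_restarts:
        launches += 1
        result = subprocess.run(cmd)
        if result.returncode == 0:
            break  # clean exit: the conference is over
        time.sleep(cooldown)  # stream dropped; wait briefly and retry
    return launches
```

One wrinkle: each restart produces a separate file, so you still have to stitch the pieces together afterwards – but at least you aren’t watching the player like a hawk.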
Which brings me to the last problem. Discoverability. I really like the way my DeepTalk project puts up a conference as a series of talks. But the only reason it works is because the conference is backed by a standard agenda/conference tool, Indico. My DeepTalk tools can interface with that, grab the agenda in a known format, and render it. We have no such standard for video.
Wouldn’t it be great if everyone did it the same way? You could point your iTunes/Zune/RealMedia/whatever tool at a conference; it would figure out the times the conference ran, schedule a recording for streams or, if the video was attached, download the data… you’d come back after the conference was over, click “put conference on my mp3 player”, and jump on that long plane flight to Europe and drift off to sleep to the dulcet sounds of someone describing the latest update to the W mass and how it has moved the most probable Higgs mass a few GeV lower.
Would that be bliss, or what!?
DPF & Lepton-Photon August 20, 2009Posted by gordonwatts in Conference, DeepTalk, physics.
It is conference season! Whee!
A few weeks ago the main American particle physics conference, DPF occurred. This is a big conference with lots of plenary and parallel sessions:
At the time I was a short distance away from Detroit, in Ann Arbor, being a Dad. It was a bummer not to be able to attend. I made sure the conference was rendered on my DeepTalk site (the picture above is a grab from it). I’ve spent a few lunches browsing it – there are some excellent talks – I definitely recommend checking it out!
This week it is the big Lepton-Photon conference, over in Europe. They are simul-casting it as well, so I’m doing my best to watch and record bits of it (more on that in another post). I see someone already submitted it to DeepTalk, so it is partly rendered already. Unfortunately, DeepTalk can’t yet tell that the conference is still ongoing, so it doesn’t automatically update itself. I’ll make sure that happens over the weekend.
Long Lived Particles Break HEP May 13, 2009Posted by gordonwatts in Conference, physics.
In my last post I mentioned that long lived particles break some basic assumptions that we make in the way we design our software and hardware in HEP. One fascinating example of this that was brought into clear relief for me at this workshop is the interaction between Monte Carlo generation and detector simulation. Look again at the picture I had up last time:
While what I’ve shown above is real data, let’s imagine it was a simulation for the sake of discussion. Simulation is crucial – it allows us to compare what we think we know against nature. You might imagine that the code that generates everything that happens at the very center of that picture on the left is different from the code that propagates the particles out through the detector (the green, yellow, and blue lines). In fact, this is exactly how we structure our code in HEP – as a two-step process.
The first step is to generate the actual physics interaction. Say top quark production, or Higgs production and decay, or a Hidden Valley decay. As output the generator produces a list of particles and the directions they are heading. Most of them will then stream through the detector, leaving tracks and data similar to the right side of the picture above. At this point we’ve got the starting points for all those “lines” – the particle trajectories – on the left.
Then the detector simulation program takes over. Its job is to simulate the detector. It takes each one of the particles and steps it, a millimeter at a time, through the detector. As the particle moves through the detector the simulator decides if it should lose some energy interacting with the material, leave a signal in a detector element, etc. Once the simulation is done, what we have looks like the experiment was actually run – we can feed it through the same software that we use on real data to find electrons, tracks, etc.
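The two-step structure is easy to caricature in code: the generator hands over a particle list, and the simulator steps each particle through the material independently, draining energy as it goes. A toy sketch with a uniform detector and a made-up energy-loss constant (nothing here is real physics – it just shows the division of labor):

```python
def simulate(particles, step_mm=1.0, detector_mm=1000.0, de_per_mm=0.002):
    """Step each generated particle through a uniform toy detector.

    particles: list of (name, energy_gev) pairs, as a generator would emit.
    Each millimeter step drains de_per_mm GeV (an invented constant).
    Returns (name, stopping position in mm, remaining energy) per particle.
    """
    results = []
    for name, energy in particles:
        pos = 0.0
        # Step until the particle exits the detector or runs out of energy.
        while pos < detector_mm and energy > 0.0:
            pos += step_mm
            energy -= de_per_mm
        results.append((name, pos, max(energy, 0.0)))
    return results

# A generator's output: two particles with made-up energies in GeV
generated = [("muon", 5.0), ("soft_pion", 0.5)]
hits = simulate(generated)
```

Notice that the simulator never looks back at the generator once it has the particle list – each particle is stepped on its own. That independence is exactly the assumption that interacting long-lived particles, like Quirk pairs, break.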
But some of these long lived particle models have particles that interact as they move through the detector. The Quirk model is the poster-boy for this (odd, a model without a web page! At least that I could find). As pairs of these move through the detector they interact with each other and with the material they are traveling through. In short – the detector simulation has to act a bit like the generator – we are mixing these two things.
The main detector simulation program (GEANT4) – written in C++ and carefully planned out – does not look anything like an event generator – written in FORTRAN (common blocks!? ‘nuff said – wait, that was flame bait, wasn’t it?). My guess is it will take a year or so to get GEANT4 updated to accommodate models like Quirks. While it isn’t a complete rewrite of the package – it was quite generally designed – the GEANT4 folks probably didn’t think of a modification to allow interactions like this as a possibility.
Which makes me wonder if in the future generators will really just be subroutines (methods, sub-classed objects, etc.) in detector simulations?🙂 We all know that detectors are the most important things out there, after all!
Hidden Valley Workshop May 11, 2009Posted by gordonwatts in Conference, Hidden Valley, physics.
I spent a very enjoyable week attending a workshop here at UW – Workshop on Signatures of Long-Lived Exotic Particles at the LHC. These workshops are funded by the DOE – and allow us to fly in a small list of experts to discuss a particular topic for a week. As you might imagine, things can get pretty intense (in a good way!).
The point about long-lived particles is that they are long lived! And not much else in the Standard Model is long lived the way these guys can be. Sure, a bottom quark might travel a few millimeters – and most of us tend to call that long-lived. But the things considered at this workshop can go much further – meters, even. All sorts of models can generate these particles – like SUSY or Hidden Valley.
Nothing in a particle physics experiment is really designed for these things – not the hardware and not the software, certainly. Not clear our brains are thinking about them too well either! This is part of what makes them so fascinating!
Take the hardware, for example. Just about everything in the Standard Model decays very quickly after it is created in a collider. Millimeters:
That is an exploded schematic view of what happens in our detector (this is a CDF event I’ve stolen from Fermilab). The inner circle on the left is about 2 inches in diameter. You see the exploded view on the right? The distance between the vertex and the secondary vertex is about a millimeter or so. That is a normal long-lived particle for particle physics. All of our code and the design of our detectors are built to discover exactly those kinds of long-lived particles.
Pick The Trends March 27, 2009Posted by gordonwatts in CHEP, computers, Conference.
I’m at CHEP 2009 – a conference which combines physics with computing. In the old days it was one of my favorites, but over the past 10 years it has been all about the Grid. Tomorrow is the conference summary – and I sometimes play a game where I try to guess what the person who has to summarize the whole conference is going to say – the trends, if you know what I mean.
- Amazon, all the time (cloud computing). Everyone is trying out its EC2 service. Some physics analysis has even been done using it. People have tried putting large storage managers up on it. Cost comparisons seem to indicate it is almost the same cost as using the GRID – and so might make a lot of sense when used as overflow. As far as cost goes, it is not at all favorable for storing large amounts of data (actually, transferring it in and out of the service).
- Virtualization. I’ve been saying this for years myself, and it looks like everyone else is saying it now too. This is great. It is driven partly by the cloud – which uses virtualization – and that was something I definitely did not foresee. Cloud computing and virtualization go hand-in-hand. CERNVM is my favorite project along these lines, but lots of people are playing with this. I’ve even seen calls to make the GRID more like the cloud.
- Multi-core. This is more wishful thinking on my part, but it should be more of a trend than it is. The basic problem is that as we head towards 100 cores on a single chip there is just no way to get 100 events’ worth of data onto the chip – the bandwidth just isn’t there. Thus we will have to change how we process the data, spending multiple cores on the same event – something no one in HEP has done up to now. One talk mentioned that problems will occur at about 24 cores on a single chip.
- CMS has put together a bunch of virtual control rooms. Now everyone in CMS wants one (80% in a few years). These are supposed to be used both for outreach and for remote monitoring. This seems successful enough that I bet ATLAS will soon have its own program. I’m not convinced how useful it is.🙂
- It is all about the data! Everyone now says running jobs on the GRID isn’t hard, it is feeding them the data. Cynically, I might say that was only because we now know how to run those jobs – several years ago that was the problem. This is a tough nut to crack. To my mind, for the first time, I see all the bits in place to solve this problem; but nothing really works reliably yet.
- And now a non-trend. I keep expecting HEP to gather around a single database. That hasn’t happened yet, and so I don’t think it ever will! That is both good and bad. We have both open source and vendor-supplied solutions in the mix.
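The multi-core point above – several cores sharing one event’s data instead of one event per core – can be sketched in a few lines. This is a toy, of course: the “event” is just lists of hit positions, and `fit_track` is a placeholder, not any real reconstruction code.

```python
# Toy sketch of many-cores-one-event: threads (standing in for cores)
# each reconstruct a slice of a single event's tracks. The "event" and
# the "fit" are invented placeholders.

from concurrent.futures import ThreadPoolExecutor

def fit_track(hits):
    """Placeholder track fit: just average the hit positions."""
    return sum(hits) / len(hits)

def reconstruct_event(event, n_workers=4):
    """Split one event's tracks across workers instead of one event per worker."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(fit_track, event))

# One event, three tracks' worth of hits
event = [[1.0, 2.0, 3.0], [10.0, 12.0], [5.0, 5.0, 5.0, 5.0]]
tracks = reconstruct_event(event)
```

The hard part in real life is that the tracks are not independent the way these lists are – which is exactly why nobody in HEP has done this yet.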
Ok – there are probably other small ones, but these are the big ones I think Dario will mention in his final talk.
UPDATE: So, how did I do? Slides should appear here eventually.
- Data – and its movement was the top problem on his list.
- GRID – and under this was Cloud computing. He made the suggestion that some GRIDs should move to look more like clouds – no one reacted.
- Performance – optimization, multi-core and many core appeared under this as well.
So, I didn’t do too badly. The big ones I had were all addressed. He had a very cool word analysis of the abstracts – which I have to figure out how to do.
I’ve got some good notes from the conference, I’ll try not to get too distracted by teaching next quarter (ahem) and post some of them in the near future.