Monday, April 25, 2005

LHC: Gigabyte per second transfer works

The Large Hadron Collider will create a huge amount of data, and one of the big tasks is to transfer the data to other labs where it can be analyzed effectively. The LHC is expected to produce 1,500 megabytes per second for ten years or so, or, according to other sources, 15,000 terabytes per year. (Note that 1,500 MB/s sustained around the clock would come to roughly 47,000 terabytes per year, so the lower figure presumably counts only the data that survives filtering and the accelerator's duty cycle.) At any rate, it will be the most intense source of data in the world.
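To see how the two quoted figures fit together, here is a quick back-of-the-envelope script; the 1,500 MB/s and 15,000 TB/year numbers come from the paragraph above, and the rest is just unit conversion (a sketch, not any official accounting):

# Back-of-the-envelope check of the LHC data-rate figures quoted above.

SECONDS_PER_YEAR = 365 * 24 * 3600        # about 3.15e7 seconds

rate_mb_per_s = 1_500                     # quoted raw rate in MB/s
raw_tb_per_year = rate_mb_per_s * SECONDS_PER_YEAR / 1e6   # 1 TB = 1e6 MB

quoted_tb_per_year = 15_000               # the "other sources" figure

print(f"1,500 MB/s around the clock = {raw_tb_per_year:,.0f} TB/year")  # ~47,000
print(f"Quoted yearly volume = {quoted_tb_per_year:,} TB/year")
print(f"Implied duty-cycle/filtering factor = {quoted_tb_per_year / raw_tb_per_year:.2f}")  # ~0.32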

Figure 1: Canterbury Cathedral is small enough to fit inside the LHC's Compact Muon Solenoid (CMS), as argued here.

It's a pleasure to inform you that the GridPP project (a 60-million-dollar project) has passed an important test, "Service Challenge 2". For a period of 10 days, eight labs (Brookhaven and sites in the EU) received a sustained 600 megabytes per second from CERN (yes, it's not yet the 1 GB/s announced in the title, but it will get there). It would take at least 2,500 years for my modem ;-) to transfer the total amount of data, namely 500 terabytes.
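Just to check the bragging, here is the same kind of arithmetic for the challenge itself; the 56 kbps modem speed is my assumption, the other numbers are from the paragraph above:

# Sanity check of the Service Challenge 2 figures quoted above.

TOTAL_BYTES = 500e12                  # 500 terabytes moved in total

achieved_rate = 600e6                 # 600 MB/s sustained from CERN
modem_rate = 56_000 / 8               # a 56 kbps dial-up modem in bytes/s (assumed)

days = TOTAL_BYTES / achieved_rate / 86_400
years = TOTAL_BYTES / modem_rate / (365 * 24 * 3600)

print(f"At 600 MB/s: {days:.0f} days")     # ~10 days, matching the test period
print(f"At 56 kbps: {years:,.0f} years")   # ~2,300 years; real modems are slower still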

The current sustained acceptance rate is only 70 MB per second, and in a series of steps, they plan to increase it to roughly 400 MB per second. Further reading:

For comments about the support from IBM and their breakthrough performance & storage virtualization software, click here or here.

I wonder whether these guys have figured out the best methods to organize the data. For example, they might borrow a couple of guys from Google to invent better methods to "index" the events, and other guys to design various useful lossy compression schemes. How many of my readers understand this stuff?
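To give a concrete picture of what I mean by lossy compression, here is a toy example: quantizing floating-point measurements to 16-bit integers over a known physical range. This is purely my illustration of the general idea, not anything the LHC collaborations actually use, and the 7,000 GeV bound is just the nominal beam energy used as an example.

# A toy illustration of lossy compression of event data: quantize
# floating-point measurements to 16-bit integers over a known range.
# A sketch of the general idea, not anything CERN actually uses.

import struct

def compress(values, lo, hi):
    """Map floats in [lo, hi] to 16-bit integers (4x smaller than doubles)."""
    scale = 65535 / (hi - lo)
    codes = [round((v - lo) * scale) for v in values]
    return struct.pack(f"{len(codes)}H", *codes)

def decompress(blob, lo, hi):
    codes = struct.unpack(f"{len(blob) // 2}H", blob)
    scale = (hi - lo) / 65535
    return [lo + c * scale for c in codes]

# Example: particle energies in GeV, known to lie below 7,000 (beam energy).
energies = [12.37, 845.2, 3.14, 6999.9]
blob = compress(energies, 0.0, 7000.0)
print(len(blob), "bytes instead of", 8 * len(energies))
print(decompress(blob, 0.0, 7000.0))   # recovered to ~0.1 GeV resolution

The trade-off is the usual one: a factor-of-four saving over double precision in exchange for roughly 0.1 GeV of resolution, which is exactly the kind of decision the people designing these schemes would have to make channel by channel.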