About GeantV

Motivations for the project

  • Massive parallelism. Moore’s law has to be reinterpreted: you no longer get more speed for free, but you do get more optimisation opportunities to exploit
  • Difficult to ask Funding Agencies for (much) more computing while, at the same time, confessing that we use only part of the “bare iron”
  • “Embarrassing parallelism” and “throughput computing” have reduced the push to shorten “time to solution”
  • HEP has “missed” several trains
    • Vectorisation (IBM VM, Cray X-MP)
    • Low parallelism (IBM VM, Cray X-MP)
    • Moderate parallelism (GPMIMD machine)
    • High parallelism (IBM SP2)
    • Heterogeneous parallelism
  • Trivial (job-level) parallelism and the evolution of clock frequency were enough
  • But now the bang-per-buck is, for us, a monotonically decreasing function, and this also affects throughput

The “dimensions of performance”

  • Vectors
  • Instruction Pipelining
  • Instruction Level Parallelism (ILP)
  • Hardware threading
The first four dimensions give micro-parallelism, and therefore a gain both in throughput and in time-to-solution (see the sketch after this list)
  • Clock frequency
gives very little gain and therefore no action is expected to be taken
  • Multi-core
  • Multi-socket
These give a gain in memory footprint and in time-to-solution, but not in throughput with respect to running independent jobs on each core
  • Multi-node
Here, possibly, running different jobs as we do now is the best solution
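
To make the micro-parallelism point concrete, here is a minimal sketch (illustrative only, not GeantV code; the names TrackSoA and propagate are assumptions) of the kind of structure-of-arrays loop that lets the vector, pipelining and ILP dimensions work on a single core:

    // Illustrative sketch, not GeantV code: TrackSoA and propagate are
    // hypothetical names. A structure-of-arrays layout gives the hot loop
    // unit-stride, dependency-free iterations that compilers can
    // auto-vectorise (e.g. gcc/clang at -O2/-O3), so one core advances
    // several tracks per instruction instead of one.
    #include <cstddef>
    #include <vector>

    struct TrackSoA {
        std::vector<double> x, y, z;    // positions, one array per component
        std::vector<double> dx, dy, dz; // unit direction components
    };

    // Advance every track by the same step length along its direction.
    void propagate(TrackSoA& t, double step) {
        const std::size_t n = t.x.size();
        for (std::size_t i = 0; i < n; ++i) {
            t.x[i] += step * t.dx[i];
            t.y[i] += step * t.dy[i];
            t.z[i] += step * t.dz[i];
        }
    }

    int main() {
        TrackSoA t;
        for (int i = 0; i < 1024; ++i) {   // gather a group of tracks
            t.x.push_back(0.0);  t.y.push_back(0.0);  t.z.push_back(0.0);
            t.dx.push_back(0.0); t.dy.push_back(0.0); t.dz.push_back(1.0);
        }
        propagate(t, 0.5);                 // all tracks advance 0.5 units along z
        return 0;
    }

Applying the same operation to many tracks at once in this way is exactly the gain in throughput and time-to-solution that the first four dimensions promise.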


Why simulation

  • The most CPU-bound application we have, with large room for speed-up
  • It is largely experiment independent
    • Unlike reconstruction or analysis
  • It is one of the most time-consuming activities in HEP computing
  • Precision improves with the square root of the number of events: the relative statistical uncertainty scales as 1/√N (made explicit below)
  • Improvements (in geometry, for instance) and new techniques are expected to feed back into reconstruction
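
To spell out the statistics behind the precision point: if an observable is estimated as the mean of N independent simulated events with per-event weights w_i of variance σ², the standard-error argument gives

    \hat{\mu} = \frac{1}{N} \sum_{i=1}^{N} w_i ,
    \qquad
    \sigma_{\hat{\mu}} = \frac{\sigma}{\sqrt{N}}
    \quad\Longrightarrow\quad
    \frac{\sigma_{\hat{\mu}}}{\hat{\mu}} \propto \frac{1}{\sqrt{N}}

so halving the statistical uncertainty requires four times as many events: the speed of the simulation directly limits the precision of the physics results.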