[Rivet] Rivet 1.2.0 validation and release plan

Eike von Seggern jan.eike.von.seggern at physik.hu-berlin.de
Tue Jan 5 09:45:22 GMT 2010


On Mon, Jan 04, 2010 at 23:54 +0000, Andy Buckley wrote:
> Hi everyone, and happy new year!
> 
> Hope you had a good break and are feeling recharged. The ongoing saga of 
> Rivet 1.2.0 may not be the first thing that you want to come back to (so 
> much for the "release in a month" goal), but fortunately I think most 
> (all?) of the development goals are accomplished: see 
> http://projects.hepforge.org/rivet/trac/report/3 . The key thing that 
> now needs to happen before release is some serious systematic testing of 
> the analyses, and fixing those which are currently in the "unvalidated" 
> group. Sorry that this is a rather big email... but please read it and 
> provide me with some rapid (i.e. immediate!) feedback.
> 
> As you know, we are under pressure to provide Rivet for early LHC MC 
> tuning and validation, so we must release within the next two weeks to 
> allow for installation and testing before the first (non-ALICE!) LHC 
> minimum-bias analyses are made public. A citeable 
> manual is also needed, so please improve the doc/rivet-manual.tex file 
> if you get the inclination. I will be taking a fairly inclusive approach 
> to authorship of this, and will check that no-one feels aggrieved before 
> pushing it on to the arXiv.
> 
> So: comprehensive validation. Yes, it's boring, and no, we aren't 
> releasing without it. Since we now have around 80 analyses to check -- 
> each of which needs some knowledge about the correct run conditions -- 
> reliable validation is a non-trivial task. We've had a couple of 
> attempts at making frameworks for this before, but we now need to come 
> up with something that will definitely work... and which we can run 
> automatically in regular checks and future releases.

I don't know the analyses well enough, so I'll stick to checking for
installation problems on the variety of SL (Scientific Linux) flavours
I use.

>    My feeling is that the best way is to be *very* simple: in Rivet's 
> SVN repo we collect a set of scripts which run generator(s) in a way 
> suitable for each analysis and write the output to a standard HepMC 
> pipe... let's call it hepmc.fifo. For example, a script for an analysis 
> which requires min bias events could be tested with Pythia via AGILe with
> 
> ---
> #! /usr/bin/env bash
> ## In CDF_1990_S123456-Pythia6.sh
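> ## Stream 10,000 min-bias events (MSEL=2) from Pythia 6.421 to the fifo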
> agile-runmc Pythia:6421 -p MSEL=2 -n 10000 -o hepmc.fifo
> ---
> 
> Alternatives using Sherpa etc. could be set up: they would have to write 
> the steering card from within the script to keep it neat. Does that 
> sound feasible? Any problems, comments, lack of vision on my part?
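
The pipe approach sounds workable to me. For what it's worth, I'd
expect the consuming side to look something like this -- an untested
sketch, reusing the placeholder analysis name from your example:

---
#! /usr/bin/env bash
## Untested sketch: create the pipe, start the generator script in
## the background, and let rivet read the events from the other end.
mkfifo hepmc.fifo
./CDF_1990_S123456-Pythia6.sh &
rivet -a CDF_1990_S123456 hepmc.fifo
rm hepmc.fifo
---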

Only one caveat about Sherpa: it expects its HepMC event file name to
end in ".hepmc2g", and I don't know whether that can be switched off.


