[Rivet] Rivet 1.2.0 validation and release plan

Andy Buckley andy.buckley at ed.ac.uk
Mon Jan 4 23:54:23 GMT 2010


Hi everyone, and happy new year!

Hope you had a good break and are feeling recharged. The ongoing saga of 
Rivet 1.2.0 may not be the first thing that you want to come back to (so 
much for the "release in a month" goal), but fortunately I think most 
(all?) of the development goals are accomplished: see 
http://projects.hepforge.org/rivet/trac/report/3 . The key thing that 
now needs to happen before release is some serious systematic testing of 
the analyses, and fixing those which are currently in the "unvalidated" 
group. Sorry that this is a rather big email... but please read it and 
provide me with some rapid (i.e. immediate!) feedback.

As you know, we are under pressure to provide Rivet for early LHC MC 
tuning and validation, and so we need to release within the next two 
weeks to allow for installation and testing before the first 
(non-ALICE!) LHC minimum bias analyses are made public. A citeable 
manual is also needed, so please improve the doc/rivet-manual.tex file 
if you get the inclination. I will be taking a fairly inclusive approach 
to authorship of this, and will check that no-one feels aggrieved before 
pushing it on to the arXiv.

So: comprehensive validation. Yes, it's boring, and no, we aren't 
releasing without it. Since we now have around 80 analyses to check -- 
each of which needs some knowledge about the correct run conditions -- 
reliable validation is a non-trivial task. We've had a couple of 
attempts at making frameworks for this before, but we now need to come 
up with something that will definitely work... and which we can run 
automatically in regular checks and future releases.
   My feeling is that the best way is to be *very* simple: in Rivet's 
SVN repo we collect a set of scripts which run generator(s) in a way 
suitable for each analysis and write the output to a standard HepMC 
pipe... let's call it hepmc.fifo. For example, a script for an analysis 
which requires min bias events could be tested with Pythia via AGILe with

---
#! /usr/bin/env bash
## In CDF_1990_S123456-Pythia6.sh
## 10k min-bias events (MSEL=2) from Pythia 6.421, streamed to the fifo
agile-runmc Pythia:6421 -p MSEL=2 -n 10000 -o hepmc.fifo
---
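
On the Rivet side, the script's output would then be read from the 
other end of the pipe in the usual way -- roughly the following, using 
the hypothetical analysis name from the example above:

---
mkfifo hepmc.fifo
./CDF_1990_S123456-Pythia6.sh &
rivet -a CDF_1990_S123456 hepmc.fifo
---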

Alternatives using Sherpa etc. could be set up: they would have to write 
the steering card from within the script to keep it neat. Does that 
sound feasible? Any problems, comments, lack of vision on my part?
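
For Sherpa, a rough sketch might look like the following -- note that 
the run-card contents here are just a placeholder skeleton, and the 
real process, beam and HepMC-output-to-fifo settings would need to be 
filled in from the Sherpa manual:

---
#! /usr/bin/env bash
## In CDF_1990_S123456-Sherpa.sh (hypothetical sketch)
## Write the steering card from within the script, then run Sherpa on it.
cat > Run.dat <<EOF
(run){
  EVENTS = 10000;
}(run)
EOF
Sherpa
---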

The one thing that troubles me is that I'd like to be able to scale up 
the number of events externally so that the same scripts could be used 
for high and low stats testing. Maybe even that is too ambitious, but 
modifying the standard form to something like this would work:

---
#! /usr/bin/env bash
## In CDF_1990_S123456-Pythia6.sh
## Default event count, scaled by an optional integer multiplier argument
NEVT=10000
if [[ -n "$1" ]]; then
   NEVT=$(( NEVT * $1 ))
fi
agile-runmc Pythia:6421 -p MSEL=2 -n $NEVT -o hepmc.fifo
---

You get the idea... but this is about as complex as I want them to be.
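
To be concrete about the scaling, a check run could then do something 
like the following (just a sketch of how the 'regular checks' might 
loop over the scripts, reusing the fifo setup from above):

---
for script in *-*.sh; do
   analysis=${script%%-*}   ## e.g. CDF_1990_S123456
   ./"$script" 10 &         ## 10x the default stats
   rivet -a "$analysis" hepmc.fifo
done
---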

I'd like to decide on a basic structure for these scripts within a day, 
and then I want everyone to chip in and make -- and run -- these scripts 
for "your" analyses. Untested = unvalidated. The turnaround time needs 
to be on the scale of the next couple of weeks, but note that most 
analyses don't need high stats runs: the UE ones are unusual in that 
respect. I think it's do-able, even though everyone has many more things 
to be doing. Thanks!

Andy

-- 
Dr Andy Buckley
SUPA Advanced Research Fellow
Particle Physics Experiment Group, University of Edinburgh

