<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p>Hello everybody,</p>
<p>I would like to give you the current status of my work on Rivet
for use in heavy-ion physics.</p>
<p>The basic requirements as discussed previously are:</p>
<p>1) It should be possible to read YODA files and to process them
either standalone or in addition to an MC+Rivet run.</p>
<p>2) There has to be a possibility for further processing of
analysis objects after they have been finalized.</p>
<p>Following the discussion, I have chosen approach #3 (see email
from 10.08.2016) and implemented these requirements in the Rivet
2.5.1 source code, which can be found at <a
class="moz-txt-link-freetext"
href="https://gitlab.cern.ch/bvolkel/rivet-for-heavy-ion/tree/master">https://gitlab.cern.ch/bvolkel/rivet-for-heavy-ion/tree/master</a>.
The additional capabilities are briefly described in the
following:</p>
<p><br>
</p>
<p>AnalysisHandler::readData( const std::string& filename )</p>
<p>This method stores pointers to the read objects in the member
AnalysisHandler::_readObjects, but only if the YODA objects have
more than 0 entries. It can be called as often as required, so
objects with coinciding paths need a well-defined treatment. So
far, older objects are simply replaced by newer ones; a proper
merging based on the steps in the finalize() method of a given
analysis could be considered later.<br>
</p>
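<p>The replace-on-collision bookkeeping can be sketched with plain standard-library types (a minimal mock, not the actual Rivet code; only the member name _readObjects and the described behaviour are taken from the text above, all other names are illustrative):</p>

```cpp
#include <map>
#include <memory>
#include <string>

// Mock stand-in for a YODA analysis object: just a path and an entry count.
struct MockObject {
  std::string path;
  int entries;
};
using MockObjectPtr = std::shared_ptr<MockObject>;

// Store a read object if and only if it has entries; on a path collision
// the older object is simply replaced by the newer one.
void storeReadObject(std::map<std::string, MockObjectPtr>& readObjects,
                     const MockObjectPtr& obj) {
  if (obj->entries > 0) readObjects[obj->path] = obj;
}
```

<p>A merging strategy could later be plugged in at the point of the map assignment, e.g. by combining the old and new object instead of overwriting.</p>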
<p><br>
</p>
<p>Analysis::replaceByData( std::map< std::string,
AnalysisObjectPtr > )</p>
<p>Whenever the path of an object booked in Analysis::init()
matches the path of one found in the map, the content of the
booked pointer is replaced by the content of the read object.</p>
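<p>The matching-and-replacement idea can be illustrated as follows (again a minimal mock with illustrative names, not the actual Rivet implementation):</p>

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

// Minimal mock of a booked analysis object: a path plus some bin content.
struct MockObject {
  std::string path;
  std::vector<double> content;
};
using MockObjectPtr = std::shared_ptr<MockObject>;

// Sketch of the replaceByData() idea: for every booked object whose path
// matches an entry in the map of read objects, copy the read content into
// the booked object. The booked pointer itself is untouched, so references
// held elsewhere in the analysis remain valid.
void replaceByData(std::vector<MockObjectPtr>& booked,
                   const std::map<std::string, MockObjectPtr>& readObjects) {
  for (MockObjectPtr& obj : booked) {
    auto it = readObjects.find(obj->path);
    if (it != readObjects.end()) obj->content = it->second->content;
  }
}
```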
<p><br>
</p>
<p>Analysis::post()</p>
<p>This method can be called for further modifications of
finalized YODA objects. Objects that are used to compute others
are never modified, so the method can be called as often as
desired without spoiling the analysis.<br>
</p>
<p><br>
</p>
<p>AnalysisHandler::post()</p>
<p>This method calls Analysis::post() for each analysis loaded
by the handler.</p>
<p>It is also possible to only read YODA files without a
generator run. In this case AnalysisHandler::post() can be called
after the read-in of the YODA files: Analysis::init() is called
and the analysis objects are booked, while Analysis::analyze( const
Event& event ) and Analysis::finalize() are skipped. After
that, the booked objects are replaced by those found in the loaded
YODA files, and finally Analysis::post() is called for each
analysis to process the read objects.</p>
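<p>A compact mock of this standalone workflow, including an R_AA-style division in the post() step (all types and names here are illustrative stand-ins, not the actual Rivet interface):</p>

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

// Minimal histogram stand-in used by this mock workflow.
struct Hist {
  std::string path;
  std::vector<double> bins;
};
using HistPtr = std::shared_ptr<Hist>;

// Standalone run over YODA-like input: init() books empty objects,
// analyze() and finalize() are skipped, the booked objects are overwritten
// by the read-in ones, and post() performs the bin-by-bin R_AA division.
std::vector<double> standalonePost(const std::map<std::string, HistPtr>& readObjects) {
  // init(): book the objects the analysis would normally fill
  HistPtr aa = std::make_shared<Hist>(Hist{"/RAA/AA", {0.0, 0.0}});
  HistPtr pp = std::make_shared<Hist>(Hist{"/RAA/pp", {0.0, 0.0}});
  // replaceByData(): overwrite booked content with the read objects
  for (const HistPtr& h : {aa, pp}) {
    auto it = readObjects.find(h->path);
    if (it != readObjects.end()) h->bins = it->second->bins;
  }
  // post(): divide AA by pp without modifying the input objects
  std::vector<double> raa;
  for (std::size_t i = 0; i < aa->bins.size(); ++i)
    raa.push_back(pp->bins[i] != 0.0 ? aa->bins[i] / pp->bins[i] : 0.0);
  return raa;
}
```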
<br>
<p>There are still open questions and issues. Three of the more
important ones might be:<br>
</p>
<p>- Although it is in principle already possible to merge YODA
files, it would be desirable to provide a general way of
implementing more sophisticated merging procedures in order to
account for complex finalize() methods.<br>
</p>
<p>- In the future, a standardized way of incorporating and
handling centrality and other heavy-ion-specific information must
be provided. This might turn out to be mostly a HepMC issue.<br>
</p>
<p>- Another important aspect of heavy-ion analyses is scenarios
where certain fits should be applied to finalized histograms, for
instance to determine peak widths. To perform such a fit, external
packages are required in addition to the standard headers used in
the analysis class. Hence, there might be a need for a general and
standardized way of including external packages and other (C++)
libraries, which is not limited to fitting.<br>
</p>
<p>Note that I have also tidied up the GitLab repository ( <a
class="moz-txt-link-freetext"
href="https://gitlab.cern.ch/bvolkel/rivet-for-heavy-ion/tree/master">https://gitlab.cern.ch/bvolkel/rivet-for-heavy-ion/tree/master
</a>) so that you can now easily see the differences between the
original code and the small extensions I added.</p>
<p>I'm looking forward to your feedback; it would be very helpful
if you could tell me whether the extensions are reasonable so
far.<br>
</p>
<p>Many thanks again, cheers,</p>
<p>Benedikt<br>
</p>
<p><br>
</p>
<p><br>
</p>
<div class="moz-cite-prefix">On 15.08.2016 16:06, Benedikt Volkel
wrote:<br>
</div>
<blockquote cite="mid:94d9b889-5abf-e887-886c-b470549c04c7@cern.ch"
type="cite">Hej Leif, hej Andy, <br>
<br>
so far I have two approaches to take centrality into account using
<br>
JEWEL. Since there is no place designated for centrality in the <br>
heavy-ion block of HepMC, JEWEL uses the place designated for the
impact <br>
parameter and prints the centrality there. <br>
<br>
In the first approach, I run JEWEL over the full centrality
region. If <br>
there are centrality dependent histograms, I decide during the
analysis <br>
which one should be filled by evaluating the HepMC heavy-ion block
'H', <br>
checking the impact parameter and therefore the centrality in this
<br>
special case. <br>
<br>
In a second approach one might only need a certain centrality
region in <br>
an analysis. I have modified mcplots slightly and it can account
for <br>
that by initializing JEWEL with the desired region. Basically, the
first <br>
approach is fully contained in the second one. <br>
<br>
In any case, I check the centrality within the Rivet analysis.
This <br>
could be crucial because by accident one could run an MC in a
different <br>
centrality region than the one assumed by the analysis. Not checking
the <br>
centrality could therefore screw up the entire analysis. To
account for <br>
that I do basically <br>
<br>
const float centr =
event.genEvent()->heavy_ion()->impact_parameter(); <br>
<br>
in the Rivet analysis. <br>
<br>
If there were a standardized way of writing the centrality to the <br>
HepMC output, both ways described above could be used in general, <br>
not just in the case of JEWEL. <br>
<br>
I hope this answers especially Leif's question. What do you think? <br>
<br>
Cheers, <br>
<br>
Benedikt <br>
<br>
<br>
On 15.08.2016 10:39, Andy Buckley wrote: <br>
<blockquote type="cite">It may be what you were going to
suggest/advertise, Leif, but in <br>
HepMC3 "custom attributes" can be attached to event objects,
meaning that generators have a place to store centrality
information unlike in HepMC2. This may be a good motivator for
HI experiment and theory to use HepMC3 -- at least that was the
intention! And I'd be happy to extend Rivet to handle this info,
once we have some feedback on what would be useful. <br>
<br>
Andy <br>
<br>
<br>
On 15/08/16 08:32, Leif Lönnblad wrote: <br>
<blockquote type="cite">Hi Benedikt, <br>
<br>
I was also planning to do some Heavy Ion developments for
rivet, and I'm <br>
very interested in your suggestions. I have not quite made up
my mind <br>
about 1, 2, or 3 yet, but I agree with Andy that your option 3
may be <br>
combined with the planned facility for re-running finalize()
on multiple <br>
yoda files. <br>
<br>
However, one thing that was not clear from your description is
how to <br>
handle centrality, which is essential in any R_AA measurement.
Do you <br>
have any ideas on that? <br>
<br>
Cheers, <br>
Leif <br>
<br>
<br>
<br>
<br>
On 2016-08-12 18:39, Benedikt Volkel wrote: <br>
<blockquote type="cite">Hej Andy, hej Frank, <br>
<br>
thanks for your replies! It is great that you are interested
in a <br>
discussion. <br>
<br>
Basically, the proposed solution #2 would be very easy <br>
because it is both fast and simple to implement and there is no need to <br>
extend Rivet. However, further desired capabilities bring major <br>
drawbacks, which also affect proper resource management, such <br>
that #1 or #3 should be preferred over #2. <br>
<br>
Especially in the case of R_AA analysis it is interesting to
combine the <br>
output of different AA generators with different pp
generators. If only <br>
a direct read-in from HepMC files is possible, there are two
ways of <br>
doing that. Firstly, one can save the entire HepMC files of
single runs <br>
in order to pass the desired combinations to Rivet
afterwards. This <br>
method requires a lot of disk space and deleting the files
means a <br>
complete new MC run in case it is needed again. The other
way is the one <br>
Frank suggested. In this approach the problem is the large
amount of <br>
time because for every combination, two complete MC runs are
required. <br>
<br>
To overcome the drawbacks it would be nice to be able to
recycle <br>
generated YODA files and to put them together afterwards in
any desired <br>
combination. This saves both computing power/time and disk
space. <br>
<br>
More generally, the fact that #2 does not mean a full integration <br>
into Rivet leads to the question of how certain pre/post-processing <br>
steps are handled in a standardized manner and how the right ones are <br>
matched to certain analyses. It might be difficult to ensure that <br>
something like <br>
<br>
$ rivet -a EXPERIMENT_YEAR_INSPIRE fifo.hepmc <br>
<br>
still works. Consequently, there might be some actual
paper-based <br>
analyses which cannot be handled by non-Rivet-experts. <br>
<br>
<br>
Finally, a solution according to #3 might be preferred over
#2 and #1 <br>
because <br>
<br>
-> everything, including general post-processing steps,
can be handled <br>
only by Rivet, <br>
<br>
-> resources are saved (in a more general way, not only
regarding R_AA <br>
analyses), <br>
<br>
-> there are other scenarios like those Andy mentioned
which could be <br>
also handled in this approach. <br>
<br>
What do you think about that? Again, we would be glad to get your <br>
feedback and additional ideas. <br>
<br>
Cheers, <br>
<br>
Benedikt <br>
<br>
On 11.08.2016 22:10, Andy Buckley wrote: <br>
<blockquote type="cite">Hi Benedikt, <br>
<br>
Thanks for getting in touch -- sounds like a productive
project. <br>
<br>
Actually, version 3 sounds a lot like what we have had in
mind for <br>
some time, as part of the development branch for handling
events with <br>
complex generator weightings: we planned to be able to
initialise <br>
Rivet analyses with pre-existing YODA files, populating
the <br>
"temporary" data objects and re-running the finalize()
function. <br>
<br>
We intended this mainly so that YODA files from
homogeneous, <br>
statistically independent parallel runs could be merged
into one <br>
mega-run, and then the finalize() steps re-run to
guarantee that <br>
arbitrarily complicated end-of-run manipulations would be
correctly <br>
computed. But it sounds like it would be similarly useful
for you... <br>
<br>
Thanks for the pointer to your code. Can I ask how long
you will be <br>
working on this project for? I look forward to the
discussion and <br>
hopefully some of us will be able to meet in person at
CERN, too... <br>
but if you are still available at the end of September, you would <br>
be very welcome (and we can pay for you) to attend our 3-day developer <br>
workshop. <br>
<br>
Cheers, <br>
Andy <br>
</blockquote>
<br>
On 11.08.2016 09:41, Frank Siegert wrote: <br>
<blockquote type="cite">Dear Benedikt, <br>
<br>
thanks for your mail and for describing your problem and
solutions <br>
very clearly. <br>
<br>
I want to throw into the discussion a 4th alternative,
which is <br>
somewhat similar to your #2 but doesn't need any
modifications of <br>
Rivet itself. I have used this approach with a Bachelor
student when <br>
we were trying to determine non-perturbative corrections
with <br>
iterative unfolding, i.e. we needed both hadronised and
non-hadronised <br>
events in the analysis at the same time to fill the
response matrix. <br>
Thus, for us it was important to preserve the correlation
between <br>
hadronised and non-hadronised event, which for you is not
an issue, so <br>
maybe this method is not necessary or more complicated for
you, but I <br>
thought I'd mention it nonetheless. <br>
<br>
We are running a standalone pre-processor script which
combines the <br>
HepMC files from the two generator runs, and by using
appropriate <br>
particle status codes embeds the non-hadronised event into
the <br>
hadronised one. We then wrote an analysis plugin including
a custom <br>
projection, which can extract separately (based on the
particle <br>
status) the non-hadronised event and the hadronised event
from the <br>
same HepMC file. This allowed us not just to fill the two
sets of <br>
histograms in the same run (and divide them in finalize),
as you would <br>
want to do it, but also fill a response matrix with
correlated events, <br>
which you probably don't care about. <br>
<br>
So basically all you would need is a pre-processing script
to combine <br>
the HepMC files, which could possibly be included in your
HI generator <br>
interface and thus not disrupt the workflow. But maybe
this is too <br>
complicated, and given that you don't need the
correlations you might <br>
be better off with your approach #1. <br>
<br>
Cheers, <br>
Frank <br>
</blockquote>
<br>
<blockquote type="cite"> <br>
<br>
On 10/08/16 21:55, Benedikt Volkel wrote: <br>
<blockquote type="cite">Dear Rivet developers, <br>
<br>
my name is Benedikt Volkel and I'm working on a summer
student <br>
project in ALICE with Jan Fiete Grosse-Oetringhaus and
Jochen Klein. <br>
The goal is to extend the mcplots project to cover
specific needs <br>
arising from heavy-ion analyses. In particular, we want
to implement <br>
a post-processing step, which is frequently required for
heavy-ion <br>
analyses. This step must take place after the production
of certain <br>
analysis output, e.g. to combine results from different
generator <br>
runs. As mcplots is based on the standard Rivet work
flow, the <br>
questions do not apply just to mcplots but more generally
to Rivet. To <br>
sketch the problem in more detail and start a discussion
on a <br>
possible implementation of a standardized
post-processing step we <br>
use the example of an R_AA analysis as a starting point.
<br>
<br>
The conceptual problem of an R_AA analysis is the
combination, here <br>
a division, of heavy ion (AA) data and pp data. The two
types of data <br>
are provided by different generator runs. We will always
assume that <br>
Rivet can figure out whether it gets events from an AA
generator or <br>
a pp generator. This differentiation could be done by
evaluating the <br>
heavy ion block 'H' in a given HepMC file and/or by
reading the beam <br>
types. We have investigated the following 3 approaches
and would <br>
like to ask for comments and feedback: <br>
<br>
1) External script: In this approach we don't modify the
Rivet <br>
framework at all. The analysis is run independently
for the two <br>
generators (pp and AA), in each case only one type of
histograms <br>
is filled while the other stays empty. In the end, we
use an <br>
external program/script to process the YODA output of
both runs <br>
and perform the division. This can be done by using
the YODA <br>
library and hence easily in Python or C++. <br>
<br>
Comments: So far there is no standard way to
distribute or <br>
execute such a post-processing executable. A standard
work flow <br>
to include a post-processing step would be desirable.
A <br>
standardized hook to execute an arbitrary external
script might <br>
provide more flexibility because those external
scripts could be <br>
written in Python or C++ and could have an almost
arbitrary level <br>
of complexity. <br>
<br>
2) Specific executable, Rivet as library: In this case I
wrote an <br>
executable which takes care of creating one instance
of <br>
Rivet::AnalysisHandler and manages the read-in of two
HepMC <br>
files. I based the code on the source code of the
executable <br>
$MCPLOTS/scripts/mcprod/rivetvm/rivetvm.cc
implemented in <br>
mcplots. My modifications are sketched in [1]. In
this way, both <br>
data sets are available in the same analysis and the
division can <br>
simply be done in the finalize step. <br>
<br>
Comments: It is also already possible on the
commandline to pass <br>
two or more HepMC files to Rivet for sequential
processing. <br>
<br>
3) The goal of my last approach was to enable Rivet to
produce <br>
reasonable analysis output without external
dependencies. <br>
Furthermore, it should be possible to have
asynchronous <br>
production of pp and heavy ion YODA files independent
from each <br>
other, bringing those together using only Rivet.
Therefore, Rivet <br>
was modified to allow reading back the YODA output.
This <br>
allows us to implement also the post-processing in
the analysis <br>
class. <br>
<br>
Comments: You can find the code on <br>
<a class="moz-txt-link-freetext"
href="https://gitlab.cern.ch/bvolkel/rivet-for-heavy-ion/tree/working">https://gitlab.cern.ch/bvolkel/rivet-for-heavy-ion/tree/working</a>.
<br>
The basic steps can be found in [2] and more comments
can be <br>
found directly in the source code. <br>
<br>
For the R_AA analysis, Rivet can be first run with a
pp generator. <br>
In the resulting YODA file only the pp objects are
filled. In a <br>
second run, with the AA generator, Rivet can be
started passing <br>
the first YODA file as additional input. In the Rivet
analysis <br>
itself the heavy ion objects are filled. However,
after <br>
finalize(), a method Analysis::replaceByData is
called. The <br>
objects normally produced by the analysis are
replaced by those <br>
from the provided YODA file if they have a non-zero
number of <br>
entries. Hence, after the replacement there are
filled and finalized <br>
pp objects coming from the first run and AA objects
from the second <br>
run. Those can now be used in a newly introduced
post() method <br>
which manages e.g. the division of histograms in case
of the R_AA <br>
analysis. It is also possible to provide a YODA file
where both <br>
the pp and the AA objects are filled and the R_AA
objects have to <br>
be calculated. No actual analysis is done (0 events
from an MC), but <br>
init(), analyze(...), finalize() and the post()
method are called. <br>
The <br>
histograms are booked, nothing happens in analyze(..)
and <br>
finalize(), the corresponding histograms are replaced
after <br>
finalize() <br>
and post() handles the division. Basically, this is a
similar <br>
approach as <br>
the one in scenario 1) but no external dependencies
are involved. <br>
Also, <br>
scenario 2) remains possible if the YODA input is
avoided. <br>
<br>
All methods work and lead to the desired output. The first two do <br>
not need an extension of the Rivet source code. While the first one <br>
allows for the largest amount of flexibility, the second one can be <br>
implemented most quickly and allows all steps to be encapsulated in <br>
one analysis class in Rivet. However, it always requires two MC <br>
generator runs at the same time. Finally, there is one thing all <br>
approaches have in common, namely the extension of the rivet <br>
executable and related ones in order to account for these analysis <br>
types on the command line: 1) linking to the external <br>
post-processing script in a consistent way, 2) parallel processing <br>
of at least two HepMC files, 3) read-in of a YODA file. <br>
<br>
<br>
I hope that I could explain the issues in a reasonable
manner. Jan, <br>
Jochen and I are looking forward to a fruitful
discussion of how to <br>
implement analyses like the one mentioned above in a
reasonable way. <br>
Please give us critical feedback concerning the
approaches and let us <br>
know if there are more appropriate ways of solving our
problems which I <br>
haven't accounted for yet. A consistent and
straightforward way of <br>
implementing those analyses in Rivet would be extremely
helpful. <br>
<br>
Best regards and many thanks, <br>
<br>
Benedikt <br>
<br>
<br>
<br>
---------------------Appendix--------------------- <br>
<br>
<br>
[1] Sketch of the modification of <br>
$MCPLOTS/scripts/mcprod/rivetvm/rivetvm.cc <br>
<br>
--------------------------------- <br>
... <br>
<br>
std::ifstream is1( file1 ); <br>
std::ifstream is2( file2 ); <br>
<br>
... <br>
<br>
AnalysisHandler rivet; <br>
<br>
HepMC::GenEvent evt1; <br>
HepMC::GenEvent evt2; <br>
<br>
rivet.addAnalysis( RAA_analysis ); <br>
<br>
// loop while both streams still deliver events <br>
while( is1 && is2 ) { <br>
<br>
... <br>
evt1.read(is1); <br>
evt2.read(is2); <br>
<br>
rivet.analyze(evt1); <br>
rivet.analyze(evt2); <br>
... <br>
<br>
} <br>
<br>
... <br>
--------------------------------- <br>
<br>
[2] Basics of the small extension of Rivet <br>
<br>
Rivet::Analysis: <br>
Introducing member: bool _haveReadData <br>
Introducing: void post() <br>
Introducing: void Analysis::replaceByData( std::map<
std::string, <br>
AnalysisObjectPtr > readObjects ) <br>
<br>
Rivet::AnalysisHandler: <br>
Introducing members: std::map< std::string,
AnalysisObjectPtr > <br>
_readObjects and bool _haveReadData <br>
Introducing: void AnalysisHandler::readData(const
std::string& <br>
filename) <br>
_______________________________________________ <br>
Rivet mailing list <br>
<a class="moz-txt-link-abbreviated"
href="mailto:Rivet@projects.hepforge.org">Rivet@projects.hepforge.org</a>
<br>
<a class="moz-txt-link-freetext"
href="https://www.hepforge.org/lists/listinfo/rivet">https://www.hepforge.org/lists/listinfo/rivet</a>
<br>
</blockquote>
<br>
<br>
</blockquote>
<br>
</blockquote>
<br>
<br>
</blockquote>
<br>
<br>
</blockquote>
<br>
</blockquote>
<br>
</body>
</html>