
  • From: Christof Roland <christof.roland AT cern.ch>
  • To: Anthony Frawley via sPHENIX-tracking-l <sphenix-tracking-l AT lists.bnl.gov>
  • Subject: [Sphenix-tracking-l] We have a timing problem...
  • Date: Tue, 2 Nov 2021 14:38:53 +0100

Hi Everybody,

following up on the discussion of the timing performance of our current code,
I ran a few benchmark jobs: 1000 jobs, one event each, HIJING 0-20 + 50 kHz of
pileup. These jobs were submitted from sphnx02, so even if a few slow machines
are in the pool, they should not dominate the results.

Time per event (1000 events each) is here:

Module                                 Events   Time per event
InttClusterizer                          1000     0.014891 sec
MvtxClusterizer                          1000     0.088210 sec
TpcClusterizer                           1000     2.677374 sec
TpcClusterCleaner                        1000     0.045161 sec
PHActsSiliconSeeding                     1000   509.086792 sec
PHActsVertexPropagator                   1000     0.044235 sec
PHCASeeding                              1000     3.974943 sec
PHSimpleKFProp                           1000     3.371190 sec
PrePropagatorPHTpcTrackSeedCircleFit     1000     0.108033 sec
PHTpcTrackSeedCircleFit                  1000     0.107930 sec
PHTrackCleaner                           1000     0.007340 sec
PHGhostRejection                         1000     0.211178 sec
PHSiliconTpcTrackMatching                1000     1.440864 sec
PHActsFirstTrkFitter                     1000     2.409158 sec
PHSimpleVertexFinder                     1000     0.044909 sec
PHRaveVertexing                          1000     0.409105 sec
PHGenFitTrackProjection                  1000     0.000386 sec
SvtxEvaluator                            1000   418.793579 sec

It looks like all modules touching the actual hits are horribly slow now.
It seems that in our recent changes to the local coordinate storage we
introduced an inefficient loop for looking things up.

I'll try to run a job through callgrind now to see if I can trace this down
further.
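For reference, a profiling run along these lines could look as follows (the macro name is a placeholder, not necessarily the exact command line used here):

```shell
# Run one event under callgrind; this is slow (10-50x), so one event is enough.
valgrind --tool=callgrind --callgrind-out-file=callgrind.out.%p \
  root.exe -b -q Fun4All_G4_sPHENIX.C

# Then rank the hottest call paths:
callgrind_annotate callgrind.out.<pid> | head -40
```

The `callgrind_annotate` output (or kcachegrind on the same file) should point straight at the lookup loop if that is where the time goes.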

cheers

Christof




