
sphenix-tracking-l - Re: [Sphenix-tracking-l] Did anyone manage to run Hijing lately?

  • From: sookhyun lee <dr.sookhyun.lee AT gmail.com>
  • To: Anthony Frawley <afrawley AT fsu.edu>
  • Cc: "sphenix-tracking-l AT lists.bnl.gov" <sphenix-tracking-l AT lists.bnl.gov>
  • Subject: Re: [Sphenix-tracking-l] Did anyone manage to run Hijing lately?
  • Date: Thu, 6 Jul 2017 13:05:15 -0500

Hi Tony,

I apologize for the late reply. (New messages from this thread do not show up in my Inbox, as they are filed in a subdirectory.)

I used all the latest and greatest macros and the TPC clusterizer, but I use a local build of g4hough for code development.
Since Sunday, July 2nd, around 1:00 PM, I have not been seeing a crazy number of hits, only about a 10% excess (the new build that day, together with recompiling g4hough, corrected the issue).
This does not seem like a big issue now, but I think it could be improved further. Also, my sincere apologies for the false alarm to Carlos, who did great work on this.

Best regards,
Sookhyun
 

 

On Mon, Jul 3, 2017 at 6:02 PM, Anthony Frawley <afrawley AT fsu.edu> wrote:

Hmm. That is interesting. Sookhyun was looking at clusters while the code was running and seeing 5 per layer in the TPC with the new code. Sookhyun, is it possible you were using old parameters or an old setup macro? What TPC clusterizing parameters were you using?


Thanks

Tony


From: Christof Roland <christof.roland AT cern.ch>
Sent: Monday, July 3, 2017 5:16 PM
To: Haiwang Yu
Cc: Anthony Frawley; sourav Tarafdar; sphenix-tracking-l AT lists.bnl.gov

Subject: Re: [Sphenix-tracking-l] Did anyone manage to run Hijing lately?
 
Hi Haiwang, 

pions show pretty much the same thing. See below.

Christof 

On 3. Jul 2017, at 22:53, Haiwang Yu <yuhw.pku AT gmail.com> wrote:

Hi Christof,

What about pions?

Haiwang

On Mon, Jul 3, 2017 at 16:40 Christof Roland <christof.roland AT cern.ch> wrote:
Hi All,

Haiwang is right, you have to turn off the scan_embed switch to get all clusters.
To speed things up for Hijing and still get ntuples for all reco objects,
you can turn the matching off by setting eval->do_track_match(false).
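
A minimal sketch of how these two switches might look in the setup macro, assuming the SvtxEvaluator API in g4eval; scan_for_embedded() is a guessed name for the scan_embed switch discussed here, and the output file name is a placeholder:

 // Sketch only: scan_for_embedded() is an assumed setter name for the
 // "scan_embed" switch; do_track_match(false) is quoted from this thread.
 #include <fun4all/Fun4AllServer.h>
 #include <g4eval/SvtxEvaluator.h>

 void setup_evaluator()
 {
   Fun4AllServer* se = Fun4AllServer::instance();

   SvtxEvaluator* eval = new SvtxEvaluator("SVTXEVALUATOR", "g4svtx_eval.root");
   eval->scan_for_embedded(false);  // ntuples for all clusters, not only embedded tracks
   eval->do_track_match(false);     // skip truth-reco matching to speed up Hijing
   se->registerSubsystem(eval);
 }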

To get the number of clusters per layer, just plot:
 ntp_cluster->Draw("layer>>h(47,-0.5,46.5)","event==1&&layer>6")
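
For a self-contained version, a sketch that opens the evaluator output and makes the same plot (the file name g4svtx_eval.root is a placeholder):

 // Minimal ROOT macro around the Draw() command above. The cut layer>6
 // keeps the TPC layers; event==1 picks a single event, as in the thread.
 #include <TFile.h>
 #include <TNtuple.h>
 #include <iostream>

 void draw_clusters_per_layer(const char* fname = "g4svtx_eval.root")
 {
   TFile* f = TFile::Open(fname);
   if (!f || f->IsZombie()) { std::cerr << "cannot open " << fname << std::endl; return; }

   TNtuple* ntp_cluster = (TNtuple*) f->Get("ntp_cluster");
   if (!ntp_cluster) { std::cerr << "no ntp_cluster found" << std::endl; return; }

   ntp_cluster->Draw("layer>>h(47,-0.5,46.5)", "event==1&&layer>6");
 }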


In my standard 10 muon events I do not see excess clusters with the latest
version of the setup macro.

See attached plot.

Cheers

   Christof 


On 3. Jul 2017, at 20:23, Haiwang Yu <yuhw.pku AT gmail.com> wrote:


Hi Tony,

If you turn off the scan_embed switch in the SvtxEvaluator, the evaluator will output all clusters.
It takes a very long time to do that for Hijing. For this issue, I would think running single pions would be enough to see the nclusters/layer.

Cheers,
Haiwang


On Mon, Jul 3, 2017 at 12:14 PM Anthony Frawley <afrawley AT fsu.edu> wrote:

Hello Christof and Sourav,


Sookhyun is working on track seeding, so she was looking at all clusters, not clusters associated with tracks. I am not sure how to see that from the evaluator ntuples ...


Tony




From: sourav Tarafdar <sourav.pheonix AT gmail.com>
Sent: Monday, July 3, 2017 11:53 AM
To: Christof Roland
Cc: Anthony Frawley; sphenix-tracking-l AT lists.bnl.gov
Subject: Re: [Sphenix-tracking-l] Did anyone manage to run Hijing lately?
 
Hi Christof and Tony,

Since yesterday I have already done three iterations of Hijing with 100 embedded muons per event. I didn’t really face any memory-related issues. I submitted 1000 condor jobs in each iteration, with 2 Hijing events per job. I have been registering only the embedded muons. The phase space of the muons is 1 < pT < 50 GeV, 2 units of eta, and 2*pi in phi.

So far I also haven’t noticed the clustering issue pointed out by Sookhyun. I see about 0.94 clusters per true track-layer crossing in pure muon events, where the muons have the same phase space as quoted above for the Hijing-embedded muons.
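
For reference, a rough sketch of how such a ratio could be pulled from the evaluator output; ntp_cluster is named in this thread, while ntp_gtrack (truth tracks) and the 40-layer TPC count (layers 7-46, matching the layer>6 cut) are assumptions:

 // Rough sketch of the clusters-per-track-layer-crossing estimate.
 // ntp_gtrack and the 40-layer TPC count are assumptions, not confirmed
 // by this thread; the file name is a placeholder.
 #include <TFile.h>
 #include <TNtuple.h>
 #include <iostream>

 void clusters_per_crossing(const char* fname = "g4svtx_eval.root")
 {
   TFile* f = TFile::Open(fname);
   if (!f || f->IsZombie()) return;

   TNtuple* ntp_cluster = (TNtuple*) f->Get("ntp_cluster");
   TNtuple* ntp_gtrack  = (TNtuple*) f->Get("ntp_gtrack");
   if (!ntp_cluster || !ntp_gtrack) return;

   const double nlayers   = 40.0;  // assumed TPC layers 7..46
   const double nclusters = ntp_cluster->GetEntries("layer>6");
   const double ntracks   = ntp_gtrack->GetEntries();

   std::cout << "clusters per track-layer crossing: "
             << nclusters / (ntracks * nlayers) << std::endl;
 }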

Best regards,
-Sourav

On Jul 3, 2017, at 10:31 AM, Christof Roland <christof.roland AT cern.ch> wrote:

Hi Tony, 

thanks a lot for your response. Good to see that some events actually do finish.
I'll try a different test event and see if I get some results there.

I am not sure I can reproduce the clustering problem. 
With the current setup I get about 0.94 hits per particle-layer crossing
for muon events. Did Sookhyun run the latest and greatest macro?

Cheers

   Christof 

On 3. Jul 2017, at 17:07, Anthony Frawley <afrawley AT fsu.edu> wrote:

Hi Christof,

Over the weekend I have been running 2000 jobs at a time with central Hijing + 100 pions + 1 upsilon. About 1400-1500 of them do not reach the 20 GB limit and finish within ~1.5 hours. Of the rest, some crash (~50), ~250 are put on hold when they get too big, and ~200 are running but are just taking too long. So after ~1500 jobs finish I "condor_rm" the rest and start another 2000.

It is not very satisfactory, but I need to get these done so I can make performance plots that are needed now. We need to solve the clustering issue pointed out by Sookhyun, since that is likely the problem.

Tony


From: sPHENIX-tracking-l <sphenix-tracking-l-bounces AT lists.bnl.gov> on behalf of Christof Roland <christof.roland AT cern.ch>
Sent: Monday, July 3, 2017 10:25 AM
To: sphenix-tracking-l AT lists.bnl.gov
Subject: [Sphenix-tracking-l] Did anyone manage to run Hijing lately?
 
Hi Everybody, 

Even with the more recent change to the TPC macro I am still having trouble running Hijing events.
The jobs blow up to 20 GB and then just hang. These are jobs that just run the simulation up to the cluster
level and then write to disk.

Did anyone manage to run any Hijing events to the end lately?

Also, in the log file ROOT throws an error:
Error in <TBufferFile::WriteByteCount>: bytecount too large (more than 1073741822)
Has anyone seen this?

Thanks for your input

   Christof


<hitsperlayer.gif>


_______________________________________________
sPHENIX-tracking-l mailing list
sPHENIX-tracking-l AT lists.bnl.gov
https://lists.bnl.gov/mailman/listinfo/sphenix-tracking-l




