Re: [Star-fwd-software-l] Strange behavior of points and clusters in Run 22 production
- From: David Kapukchyan <david.kapukchyan AT email.ucr.edu>
- To: "Ogawa, Akio" <akio AT bnl.gov>
- Cc: "Brandenburg, Daniel" <star-fwd-software-l AT lists.bnl.gov>
- Subject: Re: [Star-fwd-software-l] Strange behavior of points and clusters in Run 22 production
- Date: Wed, 25 Feb 2026 13:23:49 -0800
Hello Akio,
Please see comments below
On Wed, Feb 25, 2026 at 12:04 PM Ogawa, Akio <akio AT bnl.gov> wrote:
Hello
So StMuFcsAnaRun22Qa is directly from MuDst, and StMuFcsAnaCheckFillClusPoint is from re-running the makers directly from StEvent in memory? Or from a recreated MuDst in memory?
StMuFcsAnaRun22Qa reads hits, clusters, and points directly from the MuDst. StMuFcsAnaCheckFillClusPoint re-runs the cluster maker and point maker on hits read from the MuDst.
In StEvent, there is a cluster collection for each detectorId, and each cluster has an id() that runs 0, 1, 2… within its detectorId.
In MuDst (and PicoDst) there is a single cluster/point collection shared among all detectorIds, with a common id() across all detectorIds. Is that the cause of the confusion?
No. I am aware that this is the case when reading from the MuDst and running the cluster/point makers.
Do you see anything wrong in the StMuFcsAnaCheckFillClusPoint output from StEvent? Can I conclude that the problem is in the MuDst writing/reading around the cluster-point associations/ids?
Yes, I think it is a problem with reading/writing MuDsts around the cluster-point associations. For example, if you check line 134 you will see that reading the MuDst shows 2 points for this cluster. However, when you look at line 157, which should be the corresponding cluster to line 134 based on the sigma min/max values, you see that re-running the cluster maker now shows only 1 point for this cluster. When you count the number of points, you do see 9 points in both instances. The 9 points make sense if you believe the output from re-running the cluster maker. Therefore the cluster on line 134 should say 1 point. This is one example, but it shows up again when I look at more events.
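The cross-check described above can be sketched in Python. The cluster records here are hypothetical dicts standing in for StMuFcsCluster; none of the real accessor names are assumed, and clusters are matched by their (sigmaMin, sigmaMax) signature as in the example:

```python
def compare_cluster_points(mudst_clusters, remade_clusters, tol=1e-6):
    """Match MuDst clusters against re-made clusters by their
    (sigmaMin, sigmaMax) signature and report mismatched point counts.
    Each cluster is a hypothetical dict:
        {"sigmaMin": ..., "sigmaMax": ..., "nPoints": ...}
    """
    mismatches = []
    for mc in mudst_clusters:
        for rc in remade_clusters:
            # same sigma signature -> assume it is the same cluster
            if (abs(mc["sigmaMin"] - rc["sigmaMin"]) < tol and
                    abs(mc["sigmaMax"] - rc["sigmaMax"]) < tol):
                if mc["nPoints"] != rc["nPoints"]:
                    mismatches.append((mc, rc))
                break
    return mismatches
```

For the case above (MuDst says 2 points, re-made maker says 1, same sigmas) this would flag one mismatch.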
For sigma-max near 0 (1e-14 ~ 1e-17), it must be a single-tower cluster, or 2 or more towers lying only in a single row or column, in which case sigma-min should be 0. Can you confirm?
I will need some time to check this.
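As a side note, the geometric claim above can be checked numerically: any set of towers lying on one line (a single row or column, or a single tower) has a zero minor-axis width. A small sketch, assuming sigma-min/max are the square roots of the eigenvalues of the energy-weighted covariance matrix of tower positions (the exact StFcsClusterMaker definition may differ in detail):

```python
import numpy as np

def cluster_sigmas(positions, energies):
    # Energy-weighted second-moment widths of a cluster of towers.
    # Assumption: sigma_max/sigma_min = sqrt of the eigenvalues of the
    # energy-weighted covariance matrix; for illustration only.
    w = np.asarray(energies, dtype=float)
    w = w / w.sum()
    x = np.asarray(positions, dtype=float)   # shape (n_towers, 2)
    mean = (w[:, None] * x).sum(axis=0)      # energy-weighted centroid
    d = x - mean
    cov = np.einsum("n,ni,nj->ij", w, d, d)  # weighted covariance
    ev = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    return float(np.sqrt(ev[1])), float(np.sqrt(ev[0]))  # (max, min)

# Three towers in a single row: minor-axis width is exactly zero.
smax, smin = cluster_sigmas([(0, 0), (1, 0), (2, 0)], [1.0, 2.0, 1.0])
```

Here `smin` comes out 0 while `smax` is finite, consistent with sigma-min = 0 for a one-row cluster; a truly sigma-max ≈ 0 cluster would have to be a single tower.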
Akio
===
From: David Kapukchyan <david.kapukchyan AT email.ucr.edu>
Date: Monday, February 23, 2026 at 8:34 PM
To: Ogawa, Akio <akio AT bnl.gov>
Subject: Re: Strange behavior of points and clusters in Run 22 production
Hello Akio,
I ran my code to check the micro DST clusters and points against the cluster maker and point maker run from StEvent, and I am seeing some weird things. Sometimes the number of points is correct on the point but not on the cluster. Sometimes the MuDst cluster has 2 points but the cluster maker finds only 1 point, even though they have the same sigma min and max. The output from 10 events can be seen here: /direct/star+u/dkap7827/Fcs2019/FcsAsymRun22/check.dump. The `========== StMuFcsAnaRun22Qa::FillFcsInfo Start ==========` marker shows the output from the MuDst, and `========== StMuFcsAnaCheckFillClusPoint::DoMake Start ==========` shows the output from StEvent and the cluster and point makers. If you have some time this week, maybe we can chat over Zoom?
Best,
David
On Thu, Feb 19, 2026 at 11:49 AM David Kapukchyan <david.kapukchyan AT email.ucr.edu> wrote:
At the top of that page there is a link to Jerome's how-to page on Drupal, which is where I got most of that information, but I thought it would be easier to have my own because I don't always use the scheduler :)
-David
On Thu, Feb 19, 2026 at 11:27 AM Ogawa, Akio <akio AT bnl.gov> wrote:
Thank you.
Okay, I sort of knew I needed to do this. But I found nothing about this on the STAR computing page. Is there anything? I searched and turned up nothing. You should get Jerome to put your instructions somewhere obvious on the STAR computing page.
I'll follow your instructions.
Akio
From: David Kapukchyan <david.kapukchyan AT email.ucr.edu>
Date: Thursday, February 19, 2026 at 12:39 PM
To: Ogawa, Akio <akio AT bnl.gov>
Subject: Re: Strange behavior of points and clusters in Run 22 production
Hello Akio,
I have been using the Alma 9 nodes for my jobs and would suggest you switch over as well. I wrote a how-to here: https://ucr-rhic.github.io/how-tos/star_alma9.html. Sometimes "no machine found" also happens, but when I release the job it usually finds a machine afterwards. I think fewer and fewer nodes are becoming available on the rcas machines, which is probably why it wasn't able to find a node.
Best,
David
On Thu, Feb 19, 2026 at 8:38 AM Ogawa, Akio <akio AT bnl.gov> wrote:
Thanks! I guess I'm doing something wrong with the queue? Switching to a new machine/OS or something?
Executable = runpico
Universe = vanilla
notification = never
getenv = True
Accounting_group = group_star.cas
Arguments = "23001002 10 /star/u/akio/fcstrk10 /gpfs01/star/pwg_tasks/FwdCalib/akio/202602"
Log = /gpfs01/star/pwg_tasks/FwdCalib/akio/202602/log/23001002.log
Output = /gpfs01/star/pwg_tasks/FwdCalib/akio/202602/log/23001002.log
Error = /gpfs01/star/pwg_tasks/FwdCalib/akio/202602/log/23001002.log
Queue
>condor_q -better-analyze 15621701.0
-- Schedd: rcas6006.rcf.bnl.gov : <130.199.48.126:9618?...
The Requirements expression for job 15621701.000 is
(TARGET.Arch == "X86_64") && (TARGET.OpSys == "LINUX") &&
(TARGET.Disk >= RequestDisk) && (TARGET.Memory >= RequestMemory) &&
((TARGET.FileSystemDomain == MY.FileSystemDomain) || (TARGET.HasFileTransfer))
Job 15621701.000 defines the following attributes:
DiskUsage = 1
FileSystemDomain = "rcf.bnl.gov"
RequestCpus = 1
RequestDisk = ifThenElse(DiskUsage =!= undefined,DiskUsage,1000 * 5000)
RequestMemory = ifThenElse(MemoryUsage =!= undefined,ifThenElse(MemoryUsage > 1500 * RequestCpus,MemoryUsage,1500 * RequestCpus),1500 * RequestCpus)
The Requirements expression for job 15621701.000 reduces to these conditions:
         Slots
Step    Matched  Condition
-----  --------  ---------
[0]       14352  TARGET.Arch == "X86_64"
[1]       14352  TARGET.OpSys == "LINUX"
[3]       14352  TARGET.Disk >= RequestDisk
[5]       13818  TARGET.Memory >= RequestMemory
[7]       14352  TARGET.FileSystemDomain == MY.FileSystemDomain
No successful match recorded.
Last failed match: Thu Feb 19 11:10:13 2026
Reason for last match failure: no match found
15621701.000: Run analysis summary ignoring user priority. Of 148 machines,
    0 are rejected by your job's requirements
    0 reject your job because of their own requirements
    0 match and are already running your jobs
    0 match but are serving other users
  148 are able to run your job
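For reference, the RequestMemory classad expression shown above boils down to "at least 1500 MB per requested CPU, or the observed MemoryUsage if larger". A sketch of that semantics in Python (this mirrors the expression, not HTCondor's actual evaluator):

```python
def request_memory(memory_usage=None, request_cpus=1):
    # Sketch of:
    #   ifThenElse(MemoryUsage =!= undefined,
    #              ifThenElse(MemoryUsage > 1500*RequestCpus,
    #                         MemoryUsage, 1500*RequestCpus),
    #              1500*RequestCpus)
    floor = 1500 * request_cpus
    if memory_usage is not None:          # "MemoryUsage =!= undefined"
        return memory_usage if memory_usage > floor else floor
    return floor
```

So a fresh job with no recorded MemoryUsage asks for 1500 MB, which is why 13818 of the 14352 slots still satisfied the Memory condition in the analysis above.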
From: David Kapukchyan <david.kapukchyan AT email.ucr.edu>
Date: Wednesday, February 18, 2026 at 3:29 PM
To: Ogawa, Akio <akio AT bnl.gov>
Subject: Re: Strange behavior of points and clusters in Run 22 production
Usually I do `condor_q -better-analyze jobid` to check why a job is on hold. Usually it's because of "disk quota exceeded" when transferring output. However, I think the real cause is a slowdown in the Alma NFS I/O, which Condor interprets as "disk quota exceeded". If that is the "hold reason" you are seeing, just do `condor_release` until the job either transfers the output or restarts. It's frustrating, but I don't know any other way around it. I do have my own Condor job writer and scripts for doing all this, but I now have to change them to account for this NFS slowdown nonsense. I can share them with you once they're done if you like.
-David
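The release-until-it-goes loop described above can be sketched as a small Python helper. The `release` and `is_done` arguments are injected callables — e.g. thin wrappers around `condor_release <job_id>` and a `condor_q` status check — left to the reader, so the retry logic itself can be exercised without a running schedd:

```python
import time

def release_until_done(job_id, release, is_done, max_tries=20, wait_s=30):
    # Keep releasing a held Condor job until it finishes (or we give up).
    # `release(job_id)` should issue the release; `is_done(job_id)` should
    # report whether the job has left the hold/run cycle.
    for _ in range(max_tries):
        if is_done(job_id):
            return True
        release(job_id)
        time.sleep(wait_s)
    return is_done(job_id)
```

This is only a sketch of the manual `condor_release` routine, not a replacement for fixing the underlying NFS slowdown.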
On Wed, Feb 18, 2026 at 11:26 AM Ogawa, Akio <akio AT bnl.gov> wrote:
No, directly reading the pico DST, not rerunning anything.
https://www.star.bnl.gov/protected/spin/akio/fcs/jpsi/ - original Maker version
https://www.star.bnl.gov/protected/spin/akio/fcs/jpsi/#2025Apr - pico dst reading version
Updating it at /star/u/akio/fcstrk10 with hopefully better(?) code, and scripts for finding files and submitting jobs. 21 jobs submitted but on "hold" forever… Not sure if I'm using Condor correctly...
Akio
From: David Kapukchyan <david.kapukchyan AT email.ucr.edu>
Date: Wednesday, February 18, 2026 at 2:03 PM
To: Ogawa, Akio <akio AT bnl.gov>
Subject: Re: Strange behavior of points and clusters in Run 22 production
Ok, thanks for taking a look. I don't call StFcsClusterMaker or StFcsPointMaker when I run, because I thought that the FcsCollection would be filled without needing to call these makers. Could that be it? Are you calling StFcsClusterMaker when you run on the pico DSTs? Also, what dilepton maker are you working on? I will start looking for J/psi's soon and would be curious about what you have so far. Granted, I don't use tracks right now, but I was thinking of using my EPD selection criteria as a first check.
Best,
David
On Wed, Feb 18, 2026 at 8:02 AM Ogawa, Akio <akio AT bnl.gov> wrote:
At this moment I'm reading the pico DST, not the MuDst.
The pico DST does not have points (yet), so I cannot look at that.
As for SigmaMin & Max, I'm getting 0.2~1.0 and 0.4~2.0, which looks reasonable. Not sure why you are getting 0s.
I can switch to MuDst at some point… but I'd want to get my FCS-track match and dilepton code working on the production first. Maybe tomorrow or Friday.
Akio
From: David Kapukchyan <david.kapukchyan AT email.ucr.edu>
Date: Tuesday, February 17, 2026 at 4:39 PM
To: Ogawa, Akio <akio AT bnl.gov>
Subject: Strange behavior of points and clusters in Run 22 production
Hello Akio,
I was looking at the st_physics production for Run 22 and noticed a few strange things which I should have caught before in the st_fwd production (or maybe we already talked about it, I don't remember). I am reading StMuFcsCluster and StMuFcsPoint from the MuDst, and I see that most of the clusters have 2 points. However, when I look at the number of points in the parent cluster, I mostly get 1. This seems contradictory: how can most clusters report 2 points while the points' parent clusters mostly report 1? Also, looking at the sigma min/max plot, most of the values are zero (or maybe I have the wrong x and y range?). To me it seems like something is not being written correctly to the MuDst file, but I am curious what your thoughts are.
Best,
David