
star-hp-l - Re: [Star-hp-l] [Starpapers-l] Notes for PWGC preview (11/08/2024): Measurement of J/ψ production in Au + Au collisions at 14.6, 17.3, 19.6 and 27 GeV with the STAR experiment

star-hp-l AT lists.bnl.gov

Subject: STAR HardProbes PWG

  • From: Barbara Trzeciak <barbara.trzeciak AT gmail.com>
  • To: star-hp-l AT lists.bnl.gov
  • Subject: Re: [Star-hp-l] [Starpapers-l] Notes for PWGC preview (11/08/2024): Measurement of J/ψ production in Au + Au collisions at 14.6, 17.3, 19.6 and 27 GeV with the STAR experiment
  • Date: Tue, 12 Nov 2024 15:21:20 +0100

Hi Rongrong,

For now, approach (1) is acceptable, but once we have the official numbers, we would preferably switch to those. The two approaches should be compatible; however, we’ve observed that the uncertainty values quoted in different analyses vary. This is likely because different analyzers use different cut variations and because analyses can have different sensitivities to statistical uncertainties. Therefore, a standardized approach, as in (2), would be more consistent.
And yes, the uncertainty values should indeed be dataset-dependent, so having code that analyzers can run independently remains the ultimate goal.

Cheers,
Barbara

On Tue, Nov 12, 2024 at 3:06 PM Ma, Rongrong <marr AT bnl.gov> wrote:
Hello Barbara

Thanks for your responses. 

It looks like there are two approaches:
1) Use the official uncertainty for no or very loose cuts (2%) no matter what track quality cuts are used in the analysis, and then vary the track quality cuts in the analysis to evaluate additional uncertainty. Also, the net-proton paper uses 2% for all BES-II collider energies. 
2) Use the official uncertainty for the specific track quality cuts used in the analysis (to be released by the task force for some cut combinations), and no need to perform the exercise of varying track quality cuts. For this category, is the idea to have official uncertainty numbers for each dataset rather than a common one for all BES-II datasets, since the performance of the embedding might vary across different energies? Indeed releasing those codes would be helpful since probably not all the track cut combinations will be covered in the officially released uncertainties. 
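For illustration, approach (1) could be sketched roughly as below. This is a minimal Python sketch, not official STAR code: the 2% baseline is the loose-cut recommendation mentioned in this thread, while the function names and yield values are hypothetical.

```python
import math

# Approach (1), sketched with hypothetical numbers: start from the official
# 2% tracking-efficiency uncertainty recommended for no/very loose cuts,
# then add the spread from varying the track-quality cuts in quadrature.
BASELINE_UNC = 0.02  # relative uncertainty, loose-cut recommendation

def cut_variation_unc(nominal_yield, varied_yields):
    """Maximum relative deviation of the corrected yield under cut variations."""
    return max(abs(y - nominal_yield) / nominal_yield for y in varied_yields)

def total_tracking_unc(nominal_yield, varied_yields):
    """Loose-cut baseline and cut-variation spread combined in quadrature."""
    extra = cut_variation_unc(nominal_yield, varied_yields)
    return math.sqrt(BASELINE_UNC**2 + extra**2)

# Made-up corrected yields from varying e.g. the nHitsFit and DCA cuts:
# total_tracking_unc(1000.0, [985.0, 1012.0, 992.0]) ≈ 0.025
```

In approach (2), the `total_tracking_unc` step would instead be replaced by a single official number (or code) released by the task force for the specific cut combination used.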

Are both 1) and 2) acceptable, or is 1) good for now but we need to switch to 2) at some point? Have there been any studies showing that 1) and 2) yield comparable uncertainties? 

Thanks. 

Best
Rongrong


On Nov 12, 2024, at 3:09 AM, Barbara Trzeciak <barbara.trzeciak AT gmail.com> wrote:

Hi Rongrong,

Thanks for bringing this up.

We don’t have official numbers yet. My understanding of the net-proton analysis is that they’re using a 2% tracking efficiency uncertainty, which aligns with the most recent recommendations from the tracking uncertainty group for cases with no (or very loose) cuts. They then vary the cuts on top of that. Since we don’t yet have specific numbers for different sets of cuts, this approach should be fine for now.

We've discussed with the tracking uncertainty group the need to release a table of uncertainties for commonly used cut sets, which they should be able to provide relatively quickly. 
Ideally, they will also provide code so that each analyst can evaluate the uncertainty based on their specific cuts and dataset. Once we have updates from the tracking group, we will distribute them to the collaboration.

Cheers,
Barbara


On Fri, Nov 8, 2024 at 11:20 PM Ma, Rongrong <marr AT bnl.gov> wrote:
Hello Barbara

Thanks for the note. 

As I had to leave early, I do not know if you had a chance to discuss the tracking efficiency uncertainty. According to the BES-II net-proton analysis note (https://drupal.star.bnl.gov/STAR/system/files/Note_ana_netp_BESII_tmp.pdf), the following track quality cuts are used: nHitsFit > 20, DCA < 1 cm. There are no cuts on nHitsDedx. The quoted uncertainty on tracking efficiency is 2%. Is this the official number now? Since this paper is going to be submitted very soon, it will be good to make sure everyone is on the same footing. To that end, would it be possible to release some official guidance on the tracking efficiency uncertainties for different track quality cuts?

Thanks. 

Best
Rongrong 

On Nov 8, 2024, at 11:18 AM, Barbara Trzeciak <barbara.trzeciak AT gmail.com> wrote:

Date: 11/08/2024

Participants: 
Wei Zhang, Rongrong Ma, Kaifeng Shen, Shuai Yang, Hanna Zbroszczyk, Nu Xu, Zaochen Ye, Zebo Tang, Qian Yang, Richard Seto, Yue Hang Leung, Jae Nam, Xiaoxuan Chu, Guannan Xie, Isaac Mooney, ShinIchi Esumi, Subhash Singha, (Tommy) Chun Yuen Tsang, Sooraj Radhakrishnan, Barbara Trzeciak

Title: Measurement of J/ψ production in Au + Au collisions at 14.6, 17.3, 19.6 and 27 GeV with the STAR experiment
PAs: Rongrong Ma, Kaifeng Shen, Zebo Tang, Shuai Yang, Zaochen Ye, Wangmei Zha, Wei Zhang 
Target journal: PLB

The PWGC panel previewed a paper proposal from HP PWG. The panel found that the analysis is mature and results are important and interesting, and the paper should move forward. The journal choice was also found to be appropriate. The following points were discussed.

Q: Why do you stop at 14.6 GeV when going down in beam energy? Are you limited by statistics?
A: Yes, below that there is no significant J/ψ peak.

Q: Do you have a systematic unc. associated with the p+p baseline?
A: Yes, we do. It’s based on the cited paper. 

Q: Slide 13, why don’t you have the highest pT point for 17.3 GeV? And why do the other two points fall very close to the 14.6 GeV ones?
A: We have only two data points for 17.3 GeV, due to limited statistics. And it’s a log scale, but we can check the yield extraction at these two energies. 

Q: Why do you have N_coll uncertainties, not N_part?
A: N_coll enters the R_AA calculation, while N_part is only on the x-axis.

Q: Do we have model calculations for the pT dependence? Because CNM effects can be larger at lower energies. 
A: Not for the lower energies. 

Q: What are CNM effects here? Are there existing pA data at these energies from other experiments at the SPS or Fermilab?
A: There are SPS data, but the results are not conclusive. At the SPS they use Drell-Yan as a reference, and they measure the modification as a function of system size. We can have a closer look at the existing results, but the interpretation is not straightforward.  

Q: Fig.5: what is the dominant source of sys. unc.?
A: They include Au+Au and p+p uncertainties; the p+p uncertainties and the tracking efficiency unc. dominate. We can also check a less strict DCA cut, which should reduce the tracking uncertainty. 

Q: Fig.5: why do you use different symbols for the previous RHIC results? Are these STAR results?
A: The 54.4 GeV results are not published yet, and the other Au+Au results are published. 

Q: Fig.4: can you see a step, as seen at the SPS, if you go to finer N_part binning?
A: We haven’t tried finer centrality binning; we can try for the two higher energies. 

Q: Fig.2,3: It would be good to include the other energies that we have.
A: We will try to add them.

Q: Fig.3: what is R_AA’?
A: It’s because at lower energies there are no p+p data.
Q: Suggestion is not to do this, just state it in the figure’s caption.

Q: Fig.4: add more N_part numbers.
A: We will.

Q: Are the bands around 1 in Fig. 4 included in Fig. 5?
A: Yes, everything is included. 

Q: Fig.5: suggest removing the collision energies from the legend.
A: We will.

Q: Fig. 5: 0-20% covers up to which energy?
A: Up to 200 GeV.

Q: Considering all the unc., can we say that the model underpredicts the data?
A: We have in mind only the lower energies. But we can calculate how many sigma away the model is. 
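The "how many sigma" check mentioned in the answer amounts to something like the sketch below, assuming the statistical and systematic uncertainties are uncorrelated and can be summed in quadrature (an assumption on my part, not a statement of the PAs' procedure).

```python
import math

def n_sigma(data, model, stat_unc, sys_unc):
    """Deviation of a model point from a data point, in units of the
    combined (quadrature-summed) statistical and systematic uncertainty."""
    return (data - model) / math.sqrt(stat_unc**2 + sys_unc**2)

# e.g. data 1.5, model 1.0, stat 0.3, sys 0.4 -> about 1 sigma
```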

Q: Are there CNM effects included in the model?
A: They should be included in the primordial curve.

Q: Fig. 7: do we know what the rapidity distributions are? Does the result depend on this extrapolation?
A: It was carefully checked in the cited paper. 

