
sphenix-l - [Sphenix-l] Day-1 Physics Discussion - post-meeting discussion notes

sphenix-l AT lists.bnl.gov


  • From: "Perepelitsa, Dennis" <dvp AT bnl.gov>
  • To: sPHENIX-l <sphenix-l AT lists.bnl.gov>
  • Subject: [Sphenix-l] Day-1 Physics Discussion - post-meeting discussion notes
  • Date: Wed, 31 Aug 2022 18:09:38 +0000

Hi all,

Thank you to the many participants for the lively discussion in last week’s “Day-1 Physics Discussion” meeting on 26 August 2022.

We have prepared a set of notes that summarize the main discussion points and follow-up items. They are posted on the original Indico page - https://indico.bnl.gov/event/16728/ - and also reproduced below.

Dennis and Anne



sPHENIX Physics Discussion
26 August 2022
https://indico.bnl.gov/event/16728/ 

51 participants at peak 

These minutes do not attempt to reproduce the speakers' slides; they summarize the key discussion points.

HF topical group:
  • b->D0 v2
  • D0 v1: needs the SMD to be read out
  • HF jet v2
Gunther brings up the coupling of these HF measurements to other measurements:
  • E.g., does it make sense to have b-jet v2 before inclusive hadron v2?
  • Do simple reference measurements (inclusive charged-hadron v2) need to be published, or only checked internally? Which ones are needed to build confidence?
  • Answer: for all of these measurements, the “related analyses” are important.
Chris: suggests specifying, for each measurement, which level of TPC space-charge distortion correction is needed.
  • For the HF measurements, event-by-event corrections might not be needed. Simulations will be used to understand the impact.
How low in luminosity can these measurements go?
  • Answer: they are focused on “high-impact” new results, which need luminosity.
  • Gunther: if the measurement is new at RHIC, a “proof-of-principle” measurement can make sense even with low luminosity, with follow-up papers at higher statistics.
Ross: TPC resolution is not flat in eta in the early data

Quarkonia:
  • J/psi v2 at high pT
  • Needs all the tracking calibrations
  • EMCal pedestal subtraction / coarse energy calibration
  • A measurement should be possible with something like 1/nb of calibrated data
Y(2S)/Y(1S) ratio in AuAu
  • Requires roughly the same inputs as the J/psi v2.
Dennis: do you need inner HCal information?
  • It will help at the 10% level, and ML could be used for final analyses, but neither is necessary for Day-1 measurements.
  • “EMCal pedestals” include UE & electronics pedestal.
Gunther: as soon as we reach the upsilon mass-resolution target (100 MeV), we should write a paper about it.
  • Marzia also thinks an upsilon v2 is probably possible with the data samples she is discussing, albeit with large uncertainties.
Anne: questions about the MC samples:
  • Answer: samples embedded into MC are fairly small. Sasha L. usually runs them.
  • Data embedding hadn’t been considered and is probably not needed for the first measurements.

Jet Structure
  • Dijet asymmetry, jet v2 & photon/eta spectra
  • Dijets & jet vn are high impact and provide cancellation of uncertainties.
  • The dominant uncertainties come from the unfolding.
  • Dennis: Do you really need 4-5/nb?
  • Answer: No, you can probably do it with less. However, it is useful to have a high leading-jet pT selection and a low sub-leading threshold (say, a factor of ~3 lower) to measure a large xJ range, as at the LHC.
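The asymmetric selection mentioned above can be sketched as follows. This is a toy illustration only; the specific thresholds (30 GeV leading, 10 GeV sub-leading, i.e. a factor-of-3 gap) are hypothetical values for the sketch, not sPHENIX choices:

```python
# Toy sketch of a dijet x_J selection with an asymmetric pT cut.
# The thresholds (30 / 10 GeV) are illustrative assumptions only.

def dijet_xj(pt_lead, pt_sublead, lead_min=30.0, sublead_min=10.0):
    """Return x_J = pT_sublead / pT_lead if the pair passes the
    asymmetric selection, else None."""
    if pt_lead >= lead_min and pt_sublead >= sublead_min:
        return pt_sublead / pt_lead
    return None

# A sub-leading threshold well below the leading cut keeps very
# unbalanced pairs, extending the measurable x_J range downward.
print(dijet_xj(40.0, 12.0))  # passes the selection -> 0.3
print(dijet_xj(40.0, 8.0))   # sub-leading jet below threshold -> None
```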
EMCal particle spectra - many potential advantages 
  • Gunther: Where do high-pT spectra live? I guess JS TG. High-pT hadron effort more broadly should start ASAP - needed for many other contexts. 
  • Justin: the eta meson is nice for an early demonstration of the pT reach 
Data embedding
  • Anne: particularly important to use data embedding for jets
  • Gunther: long-regretted inability to embed in PbPb in CMS - should really push in sPHENIX
  • Dennis: check with the simulations TG - does data to be used for embedding need to be read out in some special way (e.g., without zero suppression)? Tim, Jin: agree

Bulk TG
  • dN/deta: data-driven corrections; needs very few events
  • Joe: this measurement may be harder than we think without the INTT, due to the large pileup rate - otherwise low-pileup running would really be needed 
  • Tony: agree, much easier if we can include the INTT 
  • Gunther: the specific plan is to do this at very low instantaneous luminosity 
Chris: how to deal with in-time pileup, which can be of order 0.5%? (note: in the meeting, a larger value was quoted, but we believe this was related to the OOT pileup seen by detectors with large integration time)
  • One possibility is to simply discard the highest-luminosity running (early in store) where this is a problem, which is fine for certain analyses
  • Dennis: is it then important to have the ZDC to disambiguate this E-by-E?
  • Rosi: how complete is the treatment of pileup in simulation? Ejiro: have some steps towards, will discuss in future meetings 
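For context on the quoted 0.5% figure: if the number of collisions per crossing is Poisson-distributed with mean mu, the fraction of triggered (at least one collision) crossings that actually contain two or more is a one-line calculation. This is a generic back-of-the-envelope sketch, not the sPHENIX pileup model:

```python
import math

def in_time_pileup_fraction(mu):
    """Fraction of crossings with at least one collision that in fact
    contain two or more, for a Poisson-distributed collision count."""
    p_ge1 = 1.0 - math.exp(-mu)         # P(n >= 1)
    p_ge2 = p_ge1 - mu * math.exp(-mu)  # P(n >= 2)
    return p_ge2 / p_ge1

# For small mu the fraction is roughly mu/2, so an in-time pileup
# fraction of 0.5% corresponds to mu of about 0.01 collisions/crossing.
print(round(in_time_pileup_fraction(0.01), 5))  # -> 0.00499
```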
Ming: can we do crude HF jet measurements using just the MVTX for displaced vertex + the calorimeter for the jet energy measurement? 
  • Jin: we really want full tracking to reject background 
Multi-particle cumulants
  • Dennis: how competitive are these with existing RHIC or LHC? 
  • Answer: With one billion events, already better than STAR in existing measurements, and even some new measurements at RHIC 
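As a toy illustration of the two-particle cumulant method underlying such vn measurements (standard Q-vector formulas; the multiplicity, event count, and input flow value are arbitrary assumptions for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
v2_true, n_events, mult = 0.10, 2000, 100  # assumed toy parameters

def sample_event():
    """Draw `mult` azimuthal angles from dN/dphi ~ 1 + 2 v2 cos(2(phi - Psi))
    by rejection sampling, with a random event-plane angle Psi."""
    psi = rng.uniform(0.0, 2.0 * np.pi)
    phis = []
    while len(phis) < mult:
        phi = rng.uniform(0.0, 2.0 * np.pi)
        if rng.uniform(0.0, 1.0 + 2.0 * v2_true) < \
                1.0 + 2.0 * v2_true * np.cos(2.0 * (phi - psi)):
            phis.append(phi)
    return np.array(phis)

# Two-particle cumulant: c2{2} = <(|Q2|^2 - M) / (M(M-1))>, v2{2} = sqrt(c2)
c2 = []
for _ in range(n_events):
    phi = sample_event()
    q2 = np.sum(np.exp(2j * phi))
    c2.append((abs(q2) ** 2 - mult) / (mult * (mult - 1)))
v2_est = np.sqrt(np.mean(c2))
print(f"v2{{2}} = {v2_est:.3f}")  # recovers the input v2 to within ~1%
```

Higher-order cumulants (v2{4}, etc.) suppress non-flow correlations but need correspondingly more events, which is why the billion-event figure matters.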


Commissioning
  • Commissioning slides include the sub-systems and details for each of the measurements
  • Ross: why will the magnet be on only in the presence of beam? Answer: availability of cryo. 
  • Follow-up from Ross: for the TPC static distortion map, any opportunity to have the magnet on earlier would be helpful (ExB is a big contributor). It would be nice to determine the static distortions by themselves, without actual space charge as an additional contributor.
  • Gunther, Martin: How long would this take? Several hours. Maybe we can find an opportunity for this… Chris P written comment: there are maintenance days 
Gunther: we should not think of week 14 as the “start of physics”. Early analysis of “physics-like” quantities provides a sanity check of the data. For example, a dN/deta measurement would probably use data from week ~5 of the schedule on the last slide. Other good candidates: event plane, ET measurements, etc.
Tony: 1) We will extract the reference geometry from the GEANT model, and all alignment constants will be relative to that. 2) We will need some down time (days) just for alignment 

Calibrations - TPC 
  • Jin: some analyses don’t need final precision, but good efficiency. At which step of TPC calibration are we comfortable with tracking efficiency? 
  • Tony: static and average TPC corrections align adequately for efficiency. If the TPC internal alignment is at the 100-micron level, or even slightly worse, all tracking-reconstruction pieces work. 
  • Ross: in fact, average correction needs full track reconstruction (or something sufficiently like it) to work in the first place 

Calibrations - Calorimeter 
  • Dennis: for the HCal, is the only use of collision data the isolated hadrons? Yes. The preference is to use MIPs, and we won’t yet have gamma+jet or dijet samples in p+p

Computing 
  • Joe: working on auto-running of registered analysis modules on all new MC production.
  • Tony: the error (missing tracks in DSTs) was not caught because the tracking experts re-run tracking at a very low level, and only casual end users take the final tracks directly from the DST
  • Offline clarification from Joe: The tracking group will use this to monitor the tracking performance and it will be low statistics for any sort of rare probe (e.g. we will just embed 40 tracks in 1000 events or something like this). We can in principle also produce the output for user analyses if people have an analysis that fits into some box like this. For example, we could stick an upsilon in the sample too to monitor upsilon resolution and tack on user analysis modules at the end which “clean up” the signal. 
Gunther: analysis needs to be fully concurrent with data-taking / reconstruction to succeed, not just performed afterwards. He would rather lose some data (!) than take data without fast analysis. We have to have a working system for looking at the data in real time 
  • Chris: needs optimization/planning/discussion. For example: we could decide to process 100% of the calorimeter data first 
  • Gunther: explore possibility to ship data offsite to do analysis without interfering with BNL resources. Chris: some decisions we’ve made support this usage (panda, etc.)

Simulations 
  • Tony: what did you mean by alignment? 
  • Answer: basically “finite” (i.e. mis-) alignment. Consistent mapping of dead/misbehaving channels, including as a function of time. To be discussed offline



