[Sphenix-run-l] minutes shift change meeting, 06/01

  • From: Stefan Bathe <stefan.bathe AT baruch.cuny.edu>
  • To: sphenix-run-l AT lists.bnl.gov
  • Subject: [Sphenix-run-l] minutes shift change meeting, 06/01
  • Date: Thu, 1 Jun 2023 16:37:46 -0400

General (Stefan)

  • Not all RF cavities fixed at yesterday’s maintenance

  • Stochastic cooling (longitudinal) being commissioned today

  • Should lead to much longer stores (~ 6 h with 56x56 bunches)

Work Control Coordinators (Chris Pontieri)

  • No updates

Plan of the Day ()

  • Include as many subsystems as possible in the big partition and take data

  • Include INTT in the big partition after SSM

  • Debug HCal timing

  • TPOT scan

  • Beam off for TPC 9 pm-2 am, Sat and Sun

Evening (Yeonju)

  • Inherited no beam, RA (restricted access)

  • Magnet reached full field at 5:10 pm

  • Physics beam 58x58 at ~9pm, ran DAQ with MBD+LL1+HCAL in local mode, lasted ~2 hours because of the de-bunching issue

  • Another physics beam 28x28 at 11:30pm, ran DAQ with MBD+LL1+HCAL in global mode 

Night (Murad)

  • Inherited beam from evening shift (fill#33824 28x28) with magnet on

  • Joey started the DAQ with LL1+MBD+HCal in global mode.

  • Martin tried to get the EMCal into the big partition. He managed to run the EMCal in its own partition and collected 170K events with magnet on

  • After Martin left we had DAQ problems that lasted until the end of the fill

  • Fill#33824 lasted ~6 hours (11:30 pm – 5:20 am)

  • 6:44 am: new fill #33825, 28x28, ZDC coincidence rate ~10 kHz with magnet on

  • Dave managed to get the DAQ running again but with LL1+MBD only.

Day (Silas)

  • Recovered HCAL at 8:10 by clearing busy

  • Inherited beam 28x28

  • Had beam dump at 9:31 due to STAR magnet issue

  • Ran with EMCal, HCal, MBD, and LL1; however, a timing issue meant no usable data for the HCal

  • New Physics beam 56x56 at ~12:43

  • No new data as we have been hunting down timing issues/resolving busys

  • Trigger rate was at most ~1/10 of the ZDC coincidence rate (~200-400 Hz)

MBD (Alex)

  • Increased HV by ~100 V to compensate for the drop due to the B field

  • MBD-LL1 rate was low.  Timing distribution on the remote scope looked off from what it had been, and TDC distributions from the morning data looked late.  Re-ran the finedelay timing scan, changed the setting from 90 to 0 (at around 2:30 pm today), and the MBD-LL1 rate now seems about normal. Likely this is due to the new B-field-on configuration.
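
For illustration only, a minimal sketch of the kind of fine-delay scan described above: step the fine-delay setting, let the scalers accumulate, and pick the setting with the healthiest MBD-LL1 rate. The control calls set_finedelay() and read_mbd_ll1_rate() are placeholders (assumptions for this sketch), not the actual GTM/LL1 interface:

    import time
    import random

    def set_finedelay(value):
        """Placeholder: would write the fine-delay register via the real GTM/LL1 controls."""
        pass

    def read_mbd_ll1_rate():
        """Placeholder: would read the MBD-LL1 coincidence scaler (Hz); random value here."""
        return random.gauss(1000.0, 50.0)

    def scan_finedelay(settings, dwell_s=10.0):
        """Return {finedelay setting: measured MBD-LL1 rate in Hz}."""
        rates = {}
        for fd in settings:
            set_finedelay(fd)
            time.sleep(dwell_s)              # let the scaler accumulate at this setting
            rates[fd] = read_mbd_ll1_rate()
        return rates

    if __name__ == "__main__":
        result = scan_finedelay(range(0, 100, 10), dwell_s=1.0)
        best = max(result, key=result.get)
        print(f"highest-rate finedelay setting: {best} ({result[best]:.0f} Hz)")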

ZDC (John Haggerty)

  • Cleaned up the ZDC rack and cabling in 1008B; almost complete, but NIM bin didn’t come on

  • Put in fibers to crate controller, XMIT, and clock master



Background Counters (John Haggerty)

  • Checked that signals from south arrive in 1008B on the scope; controls group working to add scaler channels

Trigger (Dan)

  • Analyzing LL1 data collected last night; still on LEMO (cable) input

  • Devote time to switch to fiber input

  • Devote time to trigger on ZDC coincidences → requires changing timing

DAQ (Martin)

  • Work last night on the EMCal integration: ran the EMCal as its own partition to take the kinks out of the config scripts (major cleanup here). That was later integrated into the BP by Joey, Dan, Mickey, and others. Today saw a bit of cleanup in the HCal arena (great setup for LL1 & MBD), and we are in pretty good shape, with a suggested template for naming etc. for others.

  • After SSM:  include INTT in big partition

  • The 2nd "full" GL1/GTM unit (an identical hardware twin of "gtm.sphenix.bnl.gov") was delivered while I was away. The network cage behind the tech shop that was closed at the time of the IRR is open again. The 2nd RHIC clock fiber path is now in place (minus one 6 ft patch fiber that arrives tomorrow). That will allow us to segregate select systems to do their thing while the BP continues, and also to test changes before we deploy them in the main unit.

  • Some updates to the GL1/GTM remote APIs to accommodate scaler reads, DB dumps, that sort of thing, and also to load LUTs etc. (a hedged sketch of such a client follows this list).

  • Some work to implement some of what we learned from Chi yesterday (the EMCal ran with those updates last night).

  • Setting up more machines for a dedicated ZDC readout node, etc.
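
For illustration, a hedged sketch of what a small remote-API client along these lines could look like. The host name, port, and endpoint names (scalers, dbdump, load_lut) are assumptions made up for the sketch, not the actual GL1/GTM interface:

    import json
    import urllib.request

    class Gl1GtmClient:
        """Hypothetical wrapper around a GL1/GTM remote API (endpoints are assumed, not real)."""

        def __init__(self, host="gl1gtm.example", port=8080):
            self.base = f"http://{host}:{port}"

        def _get(self, path):
            # simple JSON-over-HTTP GET; the real API transport is not specified in these minutes
            with urllib.request.urlopen(f"{self.base}/{path}") as resp:
                return json.load(resp)

        def read_scalers(self):
            """Fetch the current scaler counts as a dict."""
            return self._get("scalers")

        def dump_db(self, run_number):
            """Ask the server to dump its configuration/DB record for a given run."""
            return self._get(f"dbdump?run={run_number}")

        def load_lut(self, trigger_bit, lut_values):
            """Upload a trigger look-up table for one trigger bit."""
            payload = json.dumps({"bit": trigger_bit, "lut": lut_values}).encode()
            req = urllib.request.Request(
                f"{self.base}/load_lut",
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)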

Online Monitoring (Chris)

  • Ready for testing

  • Works on SEBs and EBDCs

  • Will try with MBD

  • Need monitors; will start with the overhead monitors (maybe tomorrow)

HCal (Silas, Virginia)

  • Took runs today in the big partition with EMCal, MBD, LL1

  • Timing of HCal looks weird 

  • EMCal timing looks fine; timing was basically the same between EMCal and HCal last week, but now it doesn't seem to be

EMCal (Anthony)

  • Working on running in big partition

  • Running much more consistently, but still issues to debug

  • Timing still seems good relative to adjustments to LL1 and MBD

  • Looking to take data at 100 Hz in the big partition after the HCal is successfully re-timed in.


TPOT (Bade)

  • At 8:30 pm the magnet had been on happily for approximately 3 hours. We decided that this was sufficiently stable to ramp up the TPOT HV. All detectors were safely ramped up, first to safe voltage, then to operating voltage.

  • TPOT resumed regular voltage ramping operations. 

  • We took data in the Magnet On +  Beam On + Local DAQ Mode with 300V drift voltage on SCIP and SCIZ. Unfortunately, the beam had to be dumped a couple minutes later due to severe debunching. By the time we had physics again, DAQ experts requested a switch to the global mode around midnight, so there was no more data taking. 

  • The plan is to proceed with a drift voltage sweep to find the optimal gain and operating drift.
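
For illustration, a minimal sketch of such a drift-voltage sweep: step the SCIP/SCIZ drift voltage, take a short local run at each point, and record a gain proxy for each setting. The helpers set_drift_voltage(), take_local_run(), and mean_cluster_charge() are placeholders (assumptions), not the actual TPOT HV or DAQ controls, and the voltage range shown is made up:

    def set_drift_voltage(detector, volts):
        """Placeholder: would command the HV channel serving this detector's drift electrode."""
        print(f"[HV] {detector} drift -> {volts} V")

    def take_local_run(duration_s):
        """Placeholder: would start/stop a local-mode run and return its run number."""
        return 0

    def mean_cluster_charge(run_number, detector):
        """Placeholder: would analyze the run and return a gain proxy (e.g. mean cluster charge)."""
        return 0.0

    def drift_sweep(detectors=("SCIP", "SCIZ"), voltages=range(200, 451, 50)):
        """Return {drift voltage: {detector: gain proxy}} for the scanned settings."""
        results = {}
        for v in voltages:
            for det in detectors:
                set_drift_voltage(det, v)
            run = take_local_run(duration_s=300)
            results[v] = {det: mean_cluster_charge(run, det) for det in detectors}
        return results

    if __name__ == "__main__":
        for v, proxies in drift_sweep().items():
            print(v, proxies)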


TPC (Tom, Takao, John K., Luke, Thomas, Adi, Evgeny, Charles, Christal, Jin)

  • Tom: Look for opportunities for another cycle of GEM conditioning with no beam

  • Thomas, Adi: With the first monitoring run after the GEMs reached operational HV, we can see many particles wandering around in the TPC (randomly triggered cosmics; most appear to be low-energy particles from cosmic-HCal interactions?).

  • John K: getting ready for the next fleet-wide firmware/driver/utility updates

  • Christal Martin: confirmed the new firmware at EBDC16 fixed the checksum error over a large dataset

  • Code tagged, CI-built, and installed on 1 of the 24 servers

  • Will require some local-GTM time to test the new firmware deployment

  • Prefer to join big-partition running after the above DAQ upgrade

  • Need chiller

INTT (Rachid)

  • Request if possible: integrate INTT in the big partition 🙂

MVTX (Zhaozhong)

  • Run discussion today.

  • Decoder discussion with Martin

  • No protocol to read out all 6 FELIX with RCDAQ right now

  • Allowed to modify GTM 4 for MVTX only

  • Time in all 6 MVTX FELIX servers with the GTM

  • Grafana development ongoing

  • Time off by 4 hours, possibly due to a time frame issue (see the hedged sketch after this list)

  • Missing data points (demo data points, not all data are included)

  • Integrate flow rate into Grafana

  • Why do we want data with the B field on?

  • Still low DAQ 2 cooling flow rate (6.3 L/min)

  • Can turn on the RU anyway with low flow

  • Will NOT take a run this afternoon; instead, just test the electronics

  • Tube 15R on DAQ 2 cooling is leaking; tried to replace barrel connectors, not yet fixed. Leak is not from the connectors from the RU to the tube at the back of 2E1
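
Regarding the 4-hour time offset noted in the Grafana items above: that is the offset one would see if local EDT timestamps were written where Grafana expects UTC. That diagnosis is an assumption, not something confirmed in these minutes; a minimal sketch of converting local timestamps explicitly to UTC epoch milliseconds before pushing them:

    from datetime import datetime, timezone, timedelta

    EDT = timezone(timedelta(hours=-4))  # assumed local offset (UTC-4) during the run

    def to_utc_epoch_ms(naive_local):
        """Interpret a naive local (EDT) timestamp and return UTC epoch milliseconds."""
        return int(naive_local.replace(tzinfo=EDT).astimezone(timezone.utc).timestamp() * 1000)

    # Example: 10:00 local on 06/01 corresponds to 14:00 UTC
    print(to_utc_epoch_ms(datetime(2023, 6, 1, 10, 0, 0)))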


sEPD ()


Gas/Cooling ()

  • TPC chiller


Magnet (Kin)

  • This plot shows the magnet currents (top) and the temperature rises (bottom, ~2 K) after I did fast discharges at ~1000 A and ~900 A (partly testing the tape sequence and switches of J. Morris of C-AD).

(The red vertical line in the bottom plot is not real.)

  • The C-AD machine specialist Vincent Shoefer (on yesterday's experience tuning RHIC with the sPHENIX magnet):
    “The first injection (with no corrections) already circulated pretty well.  The orbit and coupling effects aren't negligible, but they don't push the beam into a bad configuration. The tune shifts were tiny (which I don't understand), but that certainly helps.  The effects are no worse than the STAR or PHENIX magnets.  Proton injection is at a similar rigidity, so I would expect something similar next year.”



---------------------------------------------------------------------------------
Stefan Bathe
Professor of Physics
Baruch College, CUNY
and RIKEN Visiting Scientist

Baruch:                                     BNL:
17 Lexington Ave                      Bldg. 510
office 940                                  office 2-229
phone 646-660-6272                phone 631-344-8490
----------------------------------------------------------------------------------




