  • From: Jamie Nagle <jamie.nagle AT colorado.edu>
  • To: sphenix-run-l AT lists.bnl.gov
  • Subject: [Sphenix-run-l] sPHENIX Shift Change Notes, Thursday, June 29, 2023
  • Date: Thu, 29 Jun 2023 16:28:17 -0400

Thursday, June 29, 2023


General (Stefan/Kin)


Determined:
Controlled Access at 9 am on Friday, June 30, for 2 hours.

Then 4 hours of local-mode availability, nominally 11 am - 3 pm.

Then getting the Big Partition into a working mode for the evening…


Current Priority Items (PC)



Work Control Coordinators (Chris/Joel)

  • Chris - Access next Wednesday, July 5, 2023

    • Frank and team open South pole tip doors ~7 am

    • South petals (“bike rack”) are brought into the IR and rigged onto the scaffold

    • After rigging/pole tip door opening, the IR crane will be locked out (complex LOTO)

    • AC shop arriving around 8:45 am

    • CAD carpenter will install the South diving board and assemble the baker's scaffold; any MVTX cable check needs to be before 10 am

    • Baker's scaffold will require inspection (Gaffney) before 10 am

    • sPHENIX team/sEPD team will install petals and reroute fibers

    • After AC work, the crane will be energized

    • CAD carpenter will disassemble the baker's scaffold and remove the diving board

    • South pole tip doors will be closed

Plan of the Day (Stefan/PC/all - to be revisited at end of meeting)


  • Completed the MVTX background test with 0 crossing angle, then dropped yellow (blue-only)

  • Also took data with MBD, ZDC, INTT with blue beam only and north side only triggers and then south side only triggers

  • Would be good to see results and that both setups worked before
    doing yellow-only test…

  • C-AD now doing 12x12 with a bump test (short in duration) for STAR backgrounds.

  • Next store we want zero crossing angle (dedicated ZDC data taking w/ MBD …) and then steer into 2 mrad crossing angle (comparison ZDC data)


  • Provide close-to-real-time info to CAD on background near the beam pipe (from the MBD)

  • Provide close-to-real-time MBD z-vertex to CAD to enable continuous correction of the z-vertex (John H, Mickey); a simple timing-based sketch is included at the end of this list

  • What is the schedule for INTT z-vertex calculation?

  • Useful to resurrect RunControl psql / web interface & OnlineMonitor logging of plots

  • RevTick usage turned on in GL1/GTM → need to confirm that this works

  • GL1 readout with Big Partition

  • Need an automated DAQ response when taking different systems in or out (busy/ENDAT settings, etc.)


  • Requests for local mode running

→ ADC electronics checks towards getting more of EMCal into Big Partition

→ HCal control cable swap check

→ TPOT re-check voltage scan for efficiency, ENDDAT scan, etc.

  • Big Partition global mode checking / debugging / features

  • 2 hour access for Friday

→ TPC spark test installation

→ HCal cable check and potential fix

→ In-place reprogramming of _one_ EMCal ADC board (Dan)

→ ? others
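
A minimal sketch of the timing-based MBD z-vertex estimate referenced above (not part of the shift notes and not actual sPHENIX code): it assumes the usual relation z ~ c * (t_south - t_north) / 2, with the sign convention left open; all names and numbers below are illustrative.

# Illustrative sketch only (hypothetical names, not sPHENIX software):
# estimate the collision z-vertex from the arrival-time difference
# between the south and north MBD arms.
C_CM_PER_NS = 29.9792458  # speed of light in cm/ns

def mbd_z_vertex_cm(t_south_ns, t_north_ns):
    # z ~ c * (t_south - t_north) / 2, up to the experiment's sign convention
    return 0.5 * C_CM_PER_NS * (t_south_ns - t_north_ns)

# Example: a 1 ns arrival-time difference corresponds to roughly 15 cm in z
print(round(mbd_z_vertex_cm(10.0, 9.0), 1))  # ~15.0

A close-to-real-time feed to CAD would presumably stream this kind of value (or the full MBD vertex determination) at a fixed cadence, which is the continuous z-vertex correction requested above.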



=======================================================

Evening (Brett Fadem [SL], Aditya Prasad Dash [Daq Op], Lameck Mwibanda [Data Mon], Zhongling Ji [Det Op])


  • APEX running at start of evening shift

  • Original plan was to get a physics store by ~8:30 pm. We got our first stable physics store at 10:15 pm

  • Expected to end at 5:00 AM

DAQ experts worked on adding systems: INTT, TPOT, etc.

HCAL experts performed an LED test


Night (YA, HEnyo, Abudulla)

  • DAQ development

  • 0:30: start taking data with HCAL, EMCAL (w/o seb1, 5), ZDC, MBD, TPOT, and INTT (w/o intt1)

  • Run 20353: it ran for > 1 hour and ~1.3M events. No issues. DAQ rate is ~380 Hz, limited by TPOT

  • Run 20354: again ran for > 1 hour. No problems

  • Problems with INTT started: three of the intts were producing many empty packets. Martin removed them from the Big Partition.

  • ~2:40: run 20358 started with HCAL, EMCAL (w/o seb1 and 5), ZDC, MBD, TPOT, INTT (w/o intt1, 2, 5, 6)

  • But three intts started producing empty packets. Continued the runs.

  • Runs have > 2M events

The intt problem prevented starting a new run

4:25: RHIC siren for a beam dump … turned off detectors

The beam dump was due to issues with BERT

5:10: new fill 33919, scheduled to dump at 13:10

Could not start a run: intt disabled, ebcd disabled, but it still would not run

7:50: called Martin. He is investigating.

 

Day (Athira, Maya, Stacyann, Bill)


  • Arrived on shift to a filled machine, but with residual DAQ/TRG issues. Thanks to Martin and John H, we were running again at 11:30am. 


  • Detectors brought down at 1 pm to prepare for single-beam tests. Beams were first steered to zero crossing (increasing the ZDC coincidence rate by a factor of 17/3), then the yellow beam was dropped. The single-beam testing then began.


  • Minor issues:

  • Two TPOT HV channels tripped. Fixed quickly by D.O.

  • INTT HV FILLERN-B reset by D.O.

  • Modifications to HCal voltages by Hanpu and team. 

  • D.O.'s station CONTROL1 froze. Rebooted. All detector GUIs then came back up easily, except MBD HV. Mickey fixed it - thanks!


======================================================


Magnet (Kin)

  • Nothing new to report.

MBD (Mickey, Lameck, Abdull, Stacyann)

  • Nothing new to update.

Trigger (Dan)

  • Single North and South triggers used for MVTX background studies.

  • Integration of multiple triggers next Wednesday, July 5th.

  • The 2-hit requirement on the MBD trigger will still be brought back after the ZDC special run.

GTM/GL1 (Martin/Dan/John K)

  • After the actual GL1 readout was fixed, we now find an issue with the GTM (main unit) not sending data. We will likely need a reboot. I have already done everything that Joe suggested. Requesting to reboot the GTM at a convenient time.

DAQ (Martin/John H/ Jaebeom)


MVTX (JoS, Hao-Ren, Yasser)

  • Took data with 0-crossing angle, blue beam only:

  • Three 5-minute runs with settings similar to previous runs

  • A few runs with modified chip configurations, including a mode where “large” events are truncated on the chip

Error occurrence qualitatively did not change compared to the 2 mrad crossing angle with standard chip parameters

The “truncated event” setting helped, but 2 staves still gave errors (L0_00, L1_00), both on the West side horizontal plane (?)

Had several issues with rcdaq on mvtx-flx1: slow response or crashing, not sure what caused this; the machine “felt” kind of “sluggish” (?) in realtime response

Noticed that the run number in the phnxrc file was set to 1 when we started (?)


TPC (Tom Hemmick, Jin, Takao, Evgeny, Charles, Thomas, Adi, David, Tamasz)

  • Preparing for the 2-hour access tomorrow, 06/30

  • Installing Spark Monitor Box in TPC HV rack (sPHENIX top)

  • Time TBD - As Stefan said, between fills

  • Added to Access List, earliest we would be ready is 9 AM

  • Nominally Takao, Evgeny, David (Charles around if needed)

  • Evgeny coordinated with Frank Toldo (involves turning off the rack) and Martin Purschke (IP for the spark monitor PC is obtained)

Wednesday 07/05 Access:

  • Bob Azmoun wants to:

  • Re-install trigger board + 5 laser heads (keeping 1 for lab)

  • Possibly install new fanout board (if ready by 07/05)

Same duration of time (~ 2 hours)

Wednesday 07/05 (additionally):

  • Want another diffuse laser test during the maintenance day

  • After restoring laser heads + trigger board

Probably need 5-6 hours again (as last week)

Checking if full magnetic field is OK → the magnetic field has to be off for this access day

Probably want GTM in local mode - will confirm

May keep GEM HV just low enough to read out laser flash - won’t push

We understand sEPD South installation takes priority (any other bore work?)

Data Analysis Continues (Hits - Adi, Clusters - Evgeny):


  • Jin started monitoring run yesterday


HCal (Shuhang)

  • The bias voltage for the OHCal channel mapping test was still on in some runs from last night (and the rest of the overnight runs do not have the channel-by-channel voltage correction), but that confirmed the cable swap was done correctly

  • Need to look at the script to figure out why this didn’t get reset.

Plan to take LED data to confirm that the slow-control cabling is correct

Added a button on the bias GUI so the shift crew can recover trips

EMCal ()

  • Would like to get into the south bore during the access Wednesday (before the sEPD install) to extract 2 humidity probes. It should take about 10 min.

  • Pushing PR for online monitoring this afternoon

TPOT (Hugo)

  • Successful data taking overnight (global mode + big partition). At 5:30, one of the FEEs got “stuck” sending a large amount of data without triggers; it had to be power-cycled and reinitialized, after which TPOT was working again. Note: we use a large ENDAT value to prevent triggers from arriving too close to each other, which effectively limits the trigger rate to 400 Hz (see the back-of-the-envelope note below).
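
A back-of-the-envelope check of the ENDAT-as-minimum-spacing picture described above (not from the notes; names and numbers are illustrative): if each accepted trigger is followed by a fixed dead window before the next one can be accepted, the maximum rate is simply the inverse of that window.

# Illustrative only: a fixed minimum spacing between accepted triggers
# (e.g. enforced via a large ENDAT value) caps the trigger rate at 1/spacing.
def max_trigger_rate_hz(min_spacing_ms):
    return 1000.0 / min_spacing_ms

# A ~2.5 ms minimum spacing gives 400 Hz, consistent with the 400 Hz limit
# quoted above and the ~380 Hz TPOT-limited DAQ rate seen overnight.
print(max_trigger_rate_hz(2.5))  # 400.0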


INTT (Maya)

  • Raul continues working on the intt0 empty-data issue and the clock alignment between FELIX boards.

sEPD(Rosi)

  • South side install (see Chris’s explanation for crane issues)

  • All SiPM box elements have shipped (overnight, should arrive tomorrow)


Gas/Cooling (Kin)

  • Our buyer is waiting for confirmation that the vendor may deliver a couple of CF4 bottles (partial delivery) tomorrow.

ZDC (Peter)


Background Counters (John H)

  • Blue-beam-only running gives background in the north background counters



Online Monitoring (Chris)

  • Working on web access to plot history

||------------------------------------------------------------------------------------------
|| James L. Nagle   
|| Professor of Physics, University of Colorado Boulder
|| EMAIL:   jamie.nagle AT colorado.edu
|| SKYPE:  jamie-nagle        
|| WEB:      http://spot.colorado.edu/~naglej 
||------------------------------------------------------------------------------------------

