  • From: Jamie Nagle <jamie.nagle AT colorado.edu>
  • To: sphenix-run-l AT lists.bnl.gov
  • Subject: [Sphenix-run-l] sPHENIX Shift Change Notes (Tuesday, June 27, 2023)
  • Date: Tue, 27 Jun 2023 20:29:54 -0400

General (Stefan/Kin)

  • RHIC problem being fixed behind current store

  • Tomorrow APEX 0700→2300

  • However, if the Linac problem persists, APEX will be cut 8 hours short

  • RHIC intensity still limited by vacuum

  • Factor 2-3 larger-than-expected luminosity drop with crossing angle still not understood (a back-of-the-envelope sketch of the expected geometric factor follows this list)

  • Doesn’t look like transverse cooling pushing beam into other RF buckets, since the drop is also present early in the store
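
For orientation, the expected geometric drop for Gaussian bunches colliding at a full crossing angle phi is R = 1/sqrt(1 + ((phi/2)*sigma_z/sigma_x)^2). A minimal Python sketch of this factor; the beam sizes below are placeholder assumptions, not measured RHIC parameters:

    import math

    def crossing_angle_factor(phi_full, sigma_z, sigma_x):
        # Geometric luminosity reduction for Gaussian bunches with a
        # full crossing angle phi_full (the Piwinski-angle factor):
        #   R = 1 / sqrt(1 + ((phi_full/2) * sigma_z / sigma_x)**2)
        piwinski = 0.5 * phi_full * sigma_z / sigma_x
        return 1.0 / math.sqrt(1.0 + piwinski ** 2)

    phi = 2e-3        # full crossing angle [rad] (2 mrad, as in store 33909)
    sigma_z = 0.60    # bunch length [m] -- assumed value
    sigma_x = 150e-6  # transverse beam size [m] -- assumed value
    print(f"expected R = {crossing_angle_factor(phi, sigma_z, sigma_x):.2f}")

The puzzle is that the observed drop is a factor 2-3 beyond whatever this geometric R predicts for the actual beam parameters.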

Current Priority Items (Jamie/Stefan)

  • Fixing online monitoring for MBD

  • MBD calibration to assess crossing angle

  • MBD assessment of shifts in vertex position (STAR sees this; see below; a timing-to-vertex sketch follows this list)



  • GTM/GL1 commissioning

  • DAQ stability/clean-up

  • Including TPOT in read-out along with the calorimeters; then INTT

  • Establishing correlations between detectors with FELIX read-out and detectors with DCM2 read-out

  • Understanding severity of MVTX single-event upsets and possible remedies both on the side of RHIC (collimators, etc.) and on our side (read-out work-arounds)

  • Continue commissioning of TPC

  • Continue commissioning of MVTX

  • Other items

  • Taking ZDC data with zero crossing angle to see single-neutron peak with MBD trigger (and other triggers)

  • Waiting to include ZDC single triggers (hardware)
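
Regarding the vertex-position item above: a two-arm timing detector like the MBD infers vertex z from the south-north arrival-time difference, so a drifting time calibration shows up as an apparent vertex shift. A minimal sketch; the sign convention and numbers are illustrative, not calibrated MBD values:

    C_CM_PER_NS = 29.9792458  # speed of light [cm/ns]

    def vertex_z_cm(t_south_ns, t_north_ns):
        # Vertex z from the arrival-time difference of the two arms;
        # with the south arm at negative z: z = c * (t_south - t_north) / 2
        return 0.5 * C_CM_PER_NS * (t_south_ns - t_north_ns)

    # Illustrative numbers only: a 0.5 ns inter-arm calibration drift
    # fakes a ~7.5 cm shift of the mean vertex position.
    print(vertex_z_cm(12.0, 11.5))  # -> ~ +7.5 cm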

Work Control Coordinators (Chris/Joel)

  • Status of faulty air handler?

Plan of the Day (Stefan/Anne/all - to be revisited at end of meeting)

  • Continue TPC commissioning with collision rate limited to 100 Hz until 6:00 pm
    This completed 2 hours early; what is the status for injecting a new store?

  • Continue TPOT data taking in parallel

  • Taking calorimeter (including TPOT?) data overnight (prescale to maintain 1.5 kHz rate; ~1 M event runs; the prescale arithmetic is sketched after this list)

  • Tomorrow APEX (treat the detector as for a beam dump)
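
On the overnight calorimeter item: a quick sketch of the prescale arithmetic, taking the ~8 kHz peak ZDC rate reported below as the illustrative input rate (the actual prescale lives in the trigger configuration):

    def prescale_plan(raw_rate_hz, target_rate_hz, events_per_run):
        # Integer prescale that throttles raw_rate to ~target_rate,
        # plus the wall time to collect one run at the accepted rate.
        prescale = max(1, round(raw_rate_hz / target_rate_hz))
        accepted_hz = raw_rate_hz / prescale
        minutes_per_run = events_per_run / accepted_hz / 60.0
        return prescale, accepted_hz, minutes_per_run

    # ~8 kHz peak ZDC rate, 1.5 kHz DAQ target, 1 M events per run:
    print(prescale_plan(8000.0, 1500.0, 1_000_000))
    # -> (5, 1600.0, ~10.4): prescale 5, 1.6 kHz accepted, ~10 min/run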

===============================================================

Evening (Derek Anderson, Sijan Regmi, Justin Bryan)

  • Inherited RA (and magnet on) from day shift

  • RA finished just before 5:30 pm

  • An issue with an RF cavity prevented MCR from injecting until 6:30 pm

  • Fill 33908 was unexpectedly aborted around 7 pm, but MCR was able to put up a stable store (33909: 111x111, 2 mrad crossing, 8 kHz peak ZDC) just before 8 pm

  • DAQ group used store to continue DAQ development

  • Owl shift arrived a few hours early to get up to speed

Night ()

  • Inherited beam with run 33909, smooth data taking

  • At 3:30 am, call from MCR to extend the fill due to weather conditions

  • At 4:30 am, call that the beam would be dumped; new fill at 5:30 am

  • Issue with SEB18 - see e-log entry (cannot open output file); resolved by Mickey

Day (Bill, Charles, Pedro, Athira, Alex, Maya, Stacyann)

  • One SEB (18) required expert reconfiguration.

  • HCal frequently mis-aligned. One attempt to start/stop with experts present did not succeed; experts advise to just run.

  • TPC, MVTX, and INTT are testing in local mode at 100 Hz. Present plan is to keep this fill until 6 pm.

===============================================================


Magnet (Kin)

  • Nothing new to report.


MBD (Mickey, Lameck, Alex)

  • Nothing much to report today other than seb18 being down this morning. We tried to reboot, but that didn’t work; reconfiguring rcdaq on seb18/MBD fixed it.

Trigger (Dan)


GTM/GL1 (Martin/Dan/John K)

  • Fruitful meeting with Joe Mead; there is a way to run synchronized without new firmware

DAQ (Martin/John H)

  • Working on the 2^18 / 2^20 issue where the SEB stops (a counter-rollover sketch, under stated assumptions, follows this list)

  • SEB18 issue…

  • Question on using RevTick (the setting is there in this firmware)

  • GL1 readout (downgraded machine for a test)
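
On the 2^18 / 2^20 item: assuming it refers to two components wrapping a free-running event/clock counter at different bit widths (18 vs 20 bits), a naive equality check between them fails at the first 2^18 wrap even though the streams are still in sync. A minimal sketch of the masked comparison; the widths and helper are illustrative, not the actual SEB code:

    def counters_agree(c_small, c_large, bits=18):
        # Compare counters of different widths by truncating both to
        # the narrower width before testing equality.
        mask = (1 << bits) - 1
        return (c_small & mask) == (c_large & mask)

    event = 2 ** 18         # first wrap of the 18-bit counter
    c18 = event % 2 ** 18   # 18-bit counter has rolled over to 0
    c20 = event % 2 ** 20   # 20-bit counter still reads 262144
    print(c18 == c20)               # naive check: False (spurious desync)
    print(counters_agree(c18, c20)) # masked check: True (still in sync)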

MVTX (Hao-Ren, Yasser)

  • Took several runs to check the FELIX server synchronization

TPC (Evgeny, Charles, Nick, Adi, Thomas Marshall, Takao, Jin)

  • Offline analysis showed we are likely at, or within ½ of, the target gain on the GEMs during the field-on, collisions-on run last Friday.

  • So last Friday’s data should be analyzed by the tracking team as the first full-field collision data.

  • Working with Chris and Martin on offline event building so it can be passed to tracking 

  • Updated display (Friday run 10931, ADC > 6 sigma of noise) from Thomas Marshall

  • The next critical item to enable TPC operation is the Debrecen spark protection board for safe GEM operation, which required capturing spark signals from the cable on a scope to set up the spark protection electronics

  • Monday PM: took no-beam HV data, deliberately pushing one GEM to spark, which captured the large spark signal on the scope (~1 V p2p, consistent with spark chamber data)

  • Tue noon: with beam at 100 Hz, the collision signal on the GEM is ~10 mV p2p, well below the spark signal, allowing clean identification of a spark within a few µs (no change in vertical scale on the scope; a toy amplitude-cut sketch follows this section)

  • Next on the critical path: voltage-divider board manufacturing, and installation of the spark protection during an access (ETA next week, hopefully Wednesday July 5) -> nominal beam operation

  • Online monitoring updates; regular electronics maintenance to clear stuck FEEs
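
Given the scope numbers above (~1 V p2p sparks vs ~10 mV p2p collision signals), a simple amplitude cut separates the two cleanly within a few samples. A toy sketch with an illustrative threshold; this is not the actual spark-protection electronics logic:

    def is_spark(samples_v, threshold_v=0.1):
        # Flag a spark when any sample exceeds the threshold, chosen
        # well above ~10 mV collision signals and well below ~1 V sparks.
        return any(abs(v) > threshold_v for v in samples_v)

    print(is_spark([0.004, -0.006, 0.009]))  # collision-like -> False
    print(is_spark([0.02, -0.80, 0.55]))     # spark-like     -> True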

HCal (Shuhang)

  • Cable 0-1 outer seems to be disconnected, no signal from the LED run

  • Potential bug in the online monitoring code giving a false alarm about event mismatch

EMCal (Tim)

  • Started looking at the data taken last night:

  • The workaround for the busy issue has fixed the correlation between MBD charge and EMCal energy



TPOT (Takao, Hugo)

  • Took some test data in local mode to experiment with SAMPA baseline restoration (BC3) <- works (see plots below) 



  • Reduced pedestal dispersion (both vs channel and vs sample) should allow running at a lower threshold (a toy baseline-restoration sketch follows this list).

  • Also made a quick test of tail cancellation <- failed; all data disappeared. Needs more work

  • HV Scan with Magnet ON ongoing (will be finished in ~1h) - now completed.

  • Could work on re-including TPOT in big partition + global mode
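
On the baseline-restoration point: the idea is to track a slowly varying pedestal and subtract it, so that a fixed threshold can sit closer to the (now narrower) noise band. A toy running-average follower, purely illustrative and not the actual SAMPA BC3 filter:

    def restore_baseline(adc, alpha=0.05, freeze_above=20):
        # Subtract a slowly updated pedestal estimate from each sample.
        # The estimate tracks slow drift but freezes while the sample
        # sits far above pedestal, so real pulses are preserved.
        ped, out = float(adc[0]), []
        for s in adc:
            if abs(s - ped) < freeze_above:  # baseline region: update
                ped += alpha * (s - ped)
            out.append(round(s - ped, 1))
        return out

    # Drifting pedestal around ~100 ADC counts with one pulse on top:
    print(restore_baseline([100, 101, 102, 160, 158, 103, 102]))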




INTT (Rachid, Maya)

  • Using the spare GTM and the inttdaq server in the rack room:

  • Raul continues working on clock alignment between FELIX boards

  • INTT is taking data with beam in local mode in parallel with TPC and TPOT; checking the size of the intt1 packet.

  • Today, Raul will work with the DAQ expert to include INTT in the big partition in the DAQ scripts
   

sEPD (Rosi)

  • (Can’t be at meeting) - we should be all set for the July 5th South Side install - discussion will be Thursday at 10 am -> Should be a repeat of the North side

Gas/Cooling ()


ZDC ()

  • ZDC trigger with singles (going upstairs) and with crossing angle

Background Counters ()


Online Monitoring (Chris)


||------------------------------------------------------------------------------------------
|| James L. Nagle   
|| Professor of Physics, University of Colorado Boulder
|| EMAIL:   jamie.nagle AT colorado.edu
|| SKYPE:  jamie-nagle        
|| WEB:      http://spot.colorado.edu/~naglej 
||------------------------------------------------------------------------------------------

