  • From: Jamie Nagle <jamie.nagle AT colorado.edu>
  • To: sphenix-run-l AT lists.bnl.gov
  • Subject: [Sphenix-run-l] sPHENIX Shift Change Notes (Friday, June 30, 2023)
  • Date: Fri, 30 Jun 2023 16:38:04 -0400

Friday, June 30, 2023


General (Stefan/Kin)


Current Priority Items (PC)

  • Need collaborators to sign up for owl shifts (see the sign-up)...

Work Control Coordinators (Chris/Joel)


Plan of the Day (Stefan/PC/all; to be revisited at end of meeting)

  • How to run stably and productively over the weekend…

  • Local mode now - how much longer?  
    [Getting more EMCal SEBs into Big Partition, other?]

  • Procedure for shift crews to log data taking, and any MBD timing shift (RevTick related)

  • What development work might be done Saturday OR Sunday?


========================================================================

Evening (Brett Fadem [SL], Aditya Prasad Dash [DAQ Op], Lameck Mwibanda [Data Mon], Zhongling Ji [Det Op])

  • MCR provided a fill for STAR tests at the beginning of the shift

  • The subsequent fill allowed ZDC experts to take data with beam crossing at 0 deg., then 2 mrad.

  • Once the ZDC tests were complete, we tried to run with the INTT in the big partition but encountered difficulties, so we went back to local mode while the DAQ and INTT experts attempted to fix the problem.

  • The plan was for the owl shifters to take data in global mode as best they could, but not to call the DAQ experts during the owl shift.

Night (YA, H.Enyo, Abdullah)

  • DAQ expert (Jaebeom) and INTT experts (Raul, Jaein) were working.

  • Martin fixed the GTM issue for the INTT.

  • The test of the new INTT firmware was successful. The firmware on all 8 INTT servers has been updated.

  • 1:30 beam dump.

  • 2:23 new fill 33922, 111x111

  • Run 20444 with EMCal, HCal, ZDC, MBD, TPOT, INTT (all), LL1, and GL1.

  • Ran for 1 hour; 1.3M events. No issues.

  • The new INTT firmware works; no more triggerless-mode problem.

  • Continued to take runs, 1 hour each, 1.3M events. No issues.

  • BERT issue…fixed.

  • 5 runs total, 1.3M × 5 events.


Day (Athira, Maya, Stacyann, Bill)

  • Started the shift running. Beam was dropped at ~9 am for the access.

  • Access work:

  • Evgeny, Takao, Charles, David, and Jeff installed TPC spark detector.

  • Virginia and Hanpu swapped an HCal slow controls cable. Confirmed issue is fixed.

  • Dan replaced the EMCal ADC board in rack 1W1; the removed board was checked for activation.

  • Tim and Sean worked on an EMCal rack.

Area swept and closed at ~12:45. Physics at 13:43; dump scheduled for 21:43. We are in local mode.

Numerous TraceTek alarms (AH cable #4, near chiller platform B) and IR humidity alarms were caused by an open AH door (riggers were working), which let wet air into the AH and IR. The rigging work was completed and the AH door was closed again by 14:35.

=======================================================================

Magnet (Kin)

  • Nothing new to report.
    The magnet will be turned off for the Wednesday access. We then want to leave it off just after the access for the 5-6 hour TPC diffuse laser test without beam.

MBD (Mickey, Lameck)

  • The MBD HV GUI could not start after a reboot; this happened when the computer restarted everything.

  • The MBD HV GUI uses the flock() mechanism to ensure that only one process is talking to the HV mainframe at a time. Group write permission for sphenix3 has now been added to the lock file, which means anyone in the sphenix3 group, including user sphenix-slow, can use it. This fixes the issue; operation is good now.

  • Also, keep watching whether the rev-tick is in, and whether the MBD phase delay no longer needs to be changed.

  • The lock file used is /tmp/mbdhv.lock. 

  • To add group write permission for sphenix3, use the command: chmod g+w /tmp/mbdhv.lock
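
For reference, a minimal sketch of this flock() single-instance pattern with a group-writable lock file (Python; an illustration under stated assumptions, not the actual MBD HV GUI code - only the lock file path comes from the notes above):

    import fcntl
    import os
    import sys

    LOCK_PATH = "/tmp/mbdhv.lock"  # lock file named in the notes above

    def acquire_hv_lock():
        # Open without truncating, creating the file group-writable
        # (0o664) if it does not exist yet.
        fd = os.open(LOCK_PATH, os.O_RDWR | os.O_CREAT, 0o664)
        try:
            # The creator's umask may have stripped g+w; only the file's
            # owner may repair the mode (the manual fix: chmod g+w).
            os.fchmod(fd, 0o664)
        except PermissionError:
            pass  # not the owner; the mode must already allow group write
        try:
            # Non-blocking exclusive lock: fails immediately if another
            # process (run by any user in the group) already holds it.
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            os.close(fd)
            sys.exit("Another HV control process is already running.")
        return fd  # keep the fd open; closing it releases the lock

This also matches the symptom above: without group write permission, a different user (e.g. sphenix-slow after a reboot) cannot even open the existing lock file for writing, so the GUI fails before it ever reaches the lock.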


Trigger (Dan)

  • Running MBD trigger currently… continuing to test calorimeter trigger firmware.

GTM/GL1 (Martin/Dan/John K)


DAQ (Martin/John H)

  • Running ADC readback and DCM2 readback tests in local mode. Will finish before night ends (Dan).
    (Martin is on vacation and offline today.) Some highlights:

  • GL1 logging is solved, and the first runs with GL1 info in the partition were taken yesterday; GL1 is now a firm member. GL1 (the gl1daq host, that is) should be first in the host list for run control, since it is the one consulted for the main parameter updates (like the numbers you see on the GUI).

  • The fiducial tick is now routinely enabled; I’m not aware of a data point showing whether or not that stabilizes the timing between fills.

  • My summer interns are making progress on a few projects:

  • making the RunControl interaction more palatable for the shift crews: more features, more on-demand info, and no more (or much less frequent) restarts

  • DAQ and Trigger online monitoring under the POMS umbrella

  • Working on improving our record keeping (think run database).

  • Generally much smoother DAQ running than 10 days ago, with even the shift crews noting that in the elog.


MVTX (Hao-Ren)

  • Quantitative study/analysis of the beam background is in progress; more time is needed for a more detailed study/analysis…

TPC (Tom Hemmick, Jin, Takao, Evgeny, Charles, David, Tamas, Jeff, Aaron, Nick)

  • Today’s 06.30 Access:

  • Installed Spark Monitor Box in HV crate:

  • Thanks to Aaron and Jeff for helping

  • Thanks to Kin/BNL Safety Engineer/Tamas/David for clearing up compliance issues 

  • Installation was a success! 4 Voltage Divider Cards are in there now.

  • Wednesday's 07/05 Access:

  • Bob re-installing 5 laser heads + trigger board during access

  • Requesting 5-6 hours “NO BEAM” after closure for Spark Monitor Test + Diffuse Laser Test

  • First do Spark Monitor Test, then follow up with Diffuse Laser Test

  • Since the magnetic field is already ramped down, we request that it stay down until the end of the test

  • Priority goes to sEPD or other subsystems that need to do bore work

Monitoring Run for TPC ongoing

  • Fine to end if we need to move to global mode

  • More OnlMon development/testing


HCal ()

  • An LED run showed that the slow control cable for OHCal sector 0 was swapped; this was fixed during the access.

EMCal ()

  • Went in during access this morning to investigate pedestal noise sources. Found ~15kHz power supply “switching” noise on some signal cables. Will consider adding additional capacitor filtering to +/- 6V power supply output.

  • Will use Wednesday’s access to swap out some humidity probes before the sEPD installation


TPOT ()

During the TPC access this morning, the TPC (and TPOT) HV rack was switched OFF while the TPOT HV was in SAFE mode, not OFF. This is potentially very risky for the equipment: it means the detector was turned OFF without a proper ramp.

One must make sure that everything is OFF before turning OFF a crate.

Apparently the detector was not damaged when this happened.

(In the picture: the TPOT HV crate is turned OFF at 11:37 while there is still HV applied to the detector. It is turned back ON at 12:05, with all HV now at zero, then ramped back up to SAFE.)

More access planning is needed to make sure that this does not happen again …


Also: repeating standing orders: 

  • TPOT must be turned ON by the shift crew whenever physics is declared (unless otherwise instructed), and put back to SAFE before the beam is dumped (or right after, in case of an accidental dump).

  • TPOT HV trips must be recovered as early as possible by the shift crew by pressing the “recover trips” button; no need to call the expert. (There was an instance yesterday in which a tripped channel was left off for about 2 hours before being recovered.)




INTT (Rachid, Maya)

  • The Intt-1 (Felix-1) issue was fixed last night. The issue was related to instability of the slow control at the ROC. Using this morning’s beam data, the Intt-1 ADC distributions look good.

  • The focus now is on taking beam data with the INTT detector:

  • check the timing again by looking at hits in each Felix

  • synchronization of the 8 Felix boards.


sEPD (Rosi)

  • No further update (South disk install July 5th)

Gas/Cooling (Kin)

  • From the vendor contact (via BNL buyer): “I just found out from Airgas Supply Team that you will be getting all 10 remaining on the Order on Monday.  Airgas New Jersey Plant will send via Truck to Islandia Branch over the weekend.  Islandia Branch Operations Manager - Farooq confirmed that BNL will receive their delivery on Monday. “

ZDC (Ejiro/Peter/John H)

  • A dedicated ZDC data-taking run took place yesterday to check the large difference in the location of the single-neutron peak that we saw in runs with and without a crossing angle.

  • The result of yesterday’s run did not replicate the previous results => we had forgotten that John changed the summer module in 1008 and did not adjust the analog gain into the ADC accordingly. The large difference we saw previously is due to this. Yesterday’s run result is shown below:


  • We are continuing to work on the ZDC calibration:

Background Counters ()


Online Monitoring (Chris)


||------------------------------------------------------------------------------------------
|| James L. Nagle   
|| Professor of Physics, University of Colorado Boulder
|| EMAIL:   jamie.nagle AT colorado.edu
|| SKYPE:  jamie-nagle        
|| WEB:      http://spot.colorado.edu/~naglej 
||------------------------------------------------------------------------------------------

