- From: Stefan Bathe <stefan.bathe AT baruch.cuny.edu>
- To: sphenix-run-l AT lists.bnl.gov
- Subject: [Sphenix-run-l] Minutes shift change 2023-05-30
- Date: Tue, 30 May 2023 16:28:16 -0400
General ()
Power switch-over test tomorrow 9:30-10:00am: pause magnet ramping for that time
Beam overnight (starting around 8:00 pm): confirm detectors are OK after the magnet was on
2-h beam off period for TPC first
Dump tomorrow at 7:00am for blower motors (1-2 h)
Resume beam tomorrow afternoon
Thursday: stochastic cooling setup
After that hopefully back to 56x56
Magnet being ramped as we speak
Chi will be here tomorrow
Work Control Coordinators ()
Fixed SW ECW pipe
Mike and Bob: south side diffuse laser panel holding 100 psi, testing north right now
If that holds, ready to introduce ECW tomorrow (will coordinate with Peter Hamblen)
CDU300 is back up (confirmed with Rob that it is looking OK)
sPHENIX permissive key: inconsistency with power supply group fixed
A/C work hopefully tomorrow (will take three days total, needs access to IR)
Plan of the Day ()
Start in local mode when beam comes back ~10:00pm - midnight
If Jin available at midnight: include TPC in global mode (with HV on)
If not, then we run in global mode with MBD, LL1, HCal (stop all runs at 100k events)
Evening (Anders)
Inherited a 56x56 fill.
Had one complete 28x28 fill.
Passed a 28x28 fill off to the next shift.
Many DAQ problems, which experts helped with.
Night (Murad)
Inherited beam from evening shift. At ~3:00 am MCR called to extend the fill for 20 min to give STAR time to resolve trigger issues; dumped at ~3:30 am.
Another fill at ~4:30 am and dumped at 7:30 am as scheduled.
Restricted access / opened plug door for carpenters and techs
Quiet shift with no problems - ran LL1 + MBD
Day (Silas)
Inherited no beam and restricted access (RA)
At ~8:10 am we were informed that the ECW had to be turned off. Had to turn off all LV, bias and racks
ECW restored around 12:30
Poletip doors closed, interlocks switched to allow magnet to ramp
Magnet started ramping at 2:25 pm
MBD (Alex)
Time-in GUI can now be operated from operator 1
Took laser runs with field off this morning at different HV settings. Will take runs with field-on and compare.
Shift crew took two long runs with 3M+ events with MBD at ~200 Hz overnight, which is a statement on the current state of the DAQ.
Now that we are testing higher rates, we sometimes hit a DAQ readout issue that crashes seb18. This was a problem because rcdaq_server needs the windrvr kernel module, which was not loading automatically after a reboot. Today we set up seb18 so that windrvr1260 is loaded on bootup, so this problem should be gone (a sketch of the mechanism follows).
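For reference, one standard way to have a kernel module load at boot on a systemd-based Linux host is a modules-load.d entry. The sketch below is only an illustration of that mechanism, assuming the module is installed where modprobe can find it; the actual file name and method used on seb18 are not stated in these minutes (WinDriver installs sometimes use the vendor's wdreg script instead):

    # /etc/modules-load.d/windrvr.conf  (hypothetical file name)
    # systemd-modules-load reads this file at boot and loads each
    # module listed, one per line.
    windrvr1260

    # To load the module right away without rebooting:
    sudo modprobe windrvr1260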
ZDC (Peter)
Coincidences should show up on Jin’s monitor soon
Trigger now goes into front panel (enabled in latest GL1 firmware update)
ADCs: fibers are checked out, need to be plugged in
Background Counters (John)
Trying to get South in tomorrow
Trigger (Dan)
Tonight moving over to fiber inputs to get timing coincidence
Trigger was stable over weekend
Trigger readout from the weekend seems not to have been configured correctly
DAQ (Joey)
Last night: minor hiccup with the windrvr driver - solved by reloading it.
No major developments today (power off)
HCal (Silas)
Took test-pulse data and found we needed to re-time the test pulse for the new delay
The high tower in the pedestal run has an OK waveshape
EMCal (Tim, Anthony, Sean)
Taking high gain noise data (single pixel test)
Took one run at 10 Hz, now taking a run at 100 Hz
Can get all SEBs checking in; some drop out after the run ends, though re-running the start-up script seems to help
TPOT (Bade)
TPOT was ramped down to “off” as of 9am Tuesday morning. This is to ensure the safety and health of the detector as maintenance and magnet operations proceed.
Attempted to take data between 11 pm Monday and 1:30 am Tuesday, before ramping up the magnet. There was a GTM to EBDC39 communication issue, and data taking was unsuccessful despite trying all the available recipes on the wiki. The issue was resolved around noon Tuesday, thanks to John K.
As of 2:30 pm, LV is back on; we have started similar tests with a purely random trigger (not clock) and will continue in the coming days while waiting for the beam to come back.
TPC (Tom, John K., Takao, Evgeny, Charles, Jin)
HV test planned for tonight during the no-beam/no-B-field period, with Tom available
LV/HV were off most of today after the ECW went off. LV is back on now; Takao is recovering FEE links (with the magnetic field present)
Firmware/kernel development.
GTM has been in local since 9 pm last night; we took the opportunity for a 10-hour monitoring run with the GL1 trigger until LV was turned off
If possible, we would like the TPC to take monitoring runs in either GTM-local or global mode during the magnet ramps; the goal is to record any problems with field changes.
INTT (Rachid)
INTT detector is back to normal operation (cooling flow and alarms), and LV/HV is OFF
We hope to get INTT into the big partition to take data with beam collisions alongside the other sub-detectors. To ensure the INTT detector timing is correct, we need to correlate INTT hits against the other sub-detectors (a toy sketch of the idea follows).
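As an illustration of that hit correlation (this is not sPHENIX analysis code; the function, time units, and inputs are hypothetical), one can histogram the time difference between each INTT hit and the nearest hit in a reference detector such as the MBD; a clear peak in the histogram gives the timing offset to dial in:

    # Toy sketch: estimate the INTT timing offset relative to a reference detector.
    import numpy as np

    def timing_offset(intt_times, ref_times, window=500.0, nbins=100):
        """Histogram dt = t_INTT - t_ref for the nearest reference hits
        within +/- window (arbitrary time units); return the peak dt."""
        ref = np.sort(np.asarray(ref_times, dtype=float))
        dts = []
        for t in np.asarray(intt_times, dtype=float):
            i = np.searchsorted(ref, t)
            # Check the reference hits on both sides of the insertion point.
            for j in (i - 1, i):
                if 0 <= j < len(ref) and abs(t - ref[j]) <= window:
                    dts.append(t - ref[j])
        counts, edges = np.histogram(dts, bins=nbins, range=(-window, window))
        k = int(np.argmax(counts))
        return 0.5 * (edges[k] + edges[k + 1])  # center of the most populated bin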
MVTX (Cameron)
Tested our new MVTX on/off indicator this morning before the ECW cut. It works as expected and will work with only one FELIX server taking data. Note: the stave-on light can “flicker” as we turn on the power units, when their changing current goes above the “stave on” threshold.
Two new firmware versions were developed in the last few days. We need some time to test them and see how they perform with the real detector (they were tested on our telescope).
Standing order is NOT to have the MVTX staves on during magnet ramps
Readout units can be left on safely
sEPD ()
Gas/Cooling ()
Magnet (Kin)
Testing and ramping …😰
Just ramped to the top at 4:07 pm. Will stay for 1 hour … before we do other tests.
Stefan Bathe
Baruch College, CUNY
and RIKEN Visiting Scientist
Baruch: 17 Lexington Ave, office 940, phone 646-660-6272
BNL: Bldg. 510, office 2-229, phone 631-344-8490