[Sphenix-run-l] [Correction!]Re: shift change meeting minutes on August 9 (not 10), Wednesday 2023.
- From: shimomuramaya <maya AT cc.nara-wu.ac.jp>
- To: maya shimomura <maya AT cc.nara-wu.ac.jp>
- Cc: sphenix-run-l AT lists.bnl.gov
- Subject: [Sphenix-run-l] [Correction!]Re: shift change meeting minutes on August 9 (not 10), Wednesday 2023.
- Date: Thu, 10 Aug 2023 15:29:48 -0400
Sorry, I made a mistake. These are the minutes for Wednesday, August 9th (not
for today).
> On 2023/08/10 at 15:27, shimomuramaya <maya AT cc.nara-wu.ac.jp> wrote:
>
> Dear sPHENIX
>
> Here are the SCM minutes for August 10.
>
> ************************
> General (Stefan/Kin)
> • Kin:
> • The Controls group (C-AD) seems to be starting to switch systems to
> standby. I’ve duly reminded them of our commissioning schedule until ~Oct.
> 3, and Chanaka de Silva has promised that he won’t touch the sPHENIX portion
> until we’re done with commissioning.
> • By the same token, I’ve also reminded Paul Sampson that we have been
> promised cryo support to keep the Magnet operating until the end of Friday
> (Aug. 11).
>
> Current Priority Items (PC)
> • Make use of the magnetic field, which we’ll have until Friday (maybe
> plus the weekend?): take data with the tracking detectors (TPC, MVTX, INTT,
> TPOT) and HCal
>
> Work Control Coordinators (Chris/Joel)
> • Chris
> • Plan is to install the sEPD electronics box next Wednesday
> • Many of the installation tasks have already been completed, and the
> rest will be finished before Wednesday
> • Meeting tomorrow with the sEPD team to discuss final prep for
> installation
> Plan of the Day (Stefan/PC/all; to be revisited at end of meeting)
> • Bring TPC to operating HV
> • Martin will take SEB00 and SEB01
>
> ========================================================================
>
> Evening (Virginia, Ross)
> • Took cosmics with GL1, HCal, INTT, TPC +/- TPOT, MVTX
> • TPC work at the beginning of the shift. TPC experts left the HV at a safe
> voltage (no gain) and the TPC was left in the big partition to take
> monitoring data. Fast spark detection is active; the detector operator
> should monitor the GUI.
> • Took LED data with EMCal and HCal. Looks like EMCal has the same
> timing change as HCal.
> • MVTX work to put it back in the big partition. Added back in.
> • TPOT stopped taking data. Hugo called and asked for it to be taken out
> of the big partition to fix it. He fixed it and TPOT was added back in.
> • Noticed blocks of hot towers in the HCal online monitoring. They didn’t
> appear to affect trigger rates, so we left them as is.
> Night (Silas)
> • Took cosmics data with HCal, EMCal, MVTX, INTT, TPOT, TPC
> • Noted that the MVTX seemed less stable with EMCal in readout,
> probably coincidental
> • Had 4 current dips in the MVTX, recovered with the GUI; the GUI shows an
> error message even on successful recovery
> • HCal hot tower blocks did not reappear after reconfiguring the detector
> • SEB06 continues to have problems with getting stuck
> Day (Christine)
> • Mostly uneventful shift! Took cosmics as per instructions
> • An EMCal trip did not recover when following the procedure
> • MVTX trip, did not recover correctly (?)
> • The expert call list is a little confusing; we request clarification:
> • MVTX: Which expert do we call first? Do we call different people
> for different things?
> • Detectors with multiple numbers listed for multiple people: hard
> to tell where one person’s numbers end and the next person’s begin (INTT, MVTX)
> • MVTX list trimmed to 2 names and numbers (Cameron)
> • INTT chiller turned off. Had to call experts. In process of
> recovering. Will put it back in if it’s ready when TPC is ready.
>
> ========================================================================
>
> Magnet (Kin)
> • Magnet at full field.
> • I get the feeling that Roberto (Cryo) is now quite confident that
> they can keep the Magnet at 80-100 K, even when they need to install the
> Blue Snake.
> • They’ll let the Magnet temperature drift for more than a week (to reach
> 80 K) before the LN2 kicks in. I’ve checked (for Roberto and myself) that
> drifting after a week in Nov. would take it to ~70 K.
>
> MBD (Mickey)
> • Will look into why the laser timing changed (after shift change
> meeting).
> Trigger (Dan)
> GTM/GL1 (Martin/Dan/John K)
> • Stefan: JaeBeom figured out why the LED timing changed for HCal,
> EMCal, and sEPD: on Jul 27 the GTM firmware was upgraded to version 46.
> This changed the unit of the clock counters for the lvl1 delay in the
> configuration file from 6xBCO to BCO (see the rough sketch after this
> list), but we didn’t know or notice that since there was no beam for days.
> This did not affect the forced accepts, so LED data still looked good;
> e.g., an LED run from July 31 looks just fine. When beam came back on
> August 1st, we noticed the change in the timing for triggered data and
> changed all detectors’ config files. This fixed the triggered data, but the
> LED data taken with forced accepts were then messed up. Nobody noticed
> that, since later that day (August 1st) the machine broke and there was
> commotion from that. Then on Saturday (August 5), after the power dip,
> people checked LED runs and noticed the out-of-time data for the first time.
> • Work is ongoing to finalize the DB logging for the GL1/GTM
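>
> A rough, illustrative sketch of what that unit change means for a config
> value (not actual sPHENIX code; the delay value below is made up): a lvl1
> delay that used to be written in counts of 6 BCO ticks must be written as a
> number six times larger once one count means a single BCO tick.
>
>     # Illustration only: hypothetical numbers, not the real sPHENIX settings.
>     OLD_UNIT_IN_BCO = 6   # pre-v46 firmware: one delay count = 6 BCO ticks
>     NEW_UNIT_IN_BCO = 1   # v46 firmware: one delay count = 1 BCO tick
>
>     def rescale_lvl1_delay(old_counts: int) -> int:
>         # Same physical delay, re-expressed in the new (BCO) units.
>         return old_counts * OLD_UNIT_IN_BCO // NEW_UNIT_IN_BCO
>
>     print(rescale_lvl1_delay(20))  # 20 old counts (= 120 BCO) -> 120 new counts
>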
> DAQ (Martin/John H)
> • I would like the option to take over SEB00 (EMCal) in addition to
> SEB01.
> • Granted.
> MVTX (Cameron)
> • New clock recovery button was tested and worked
> • It apparently sends an error saying the clock wasn’t recovered
> but I inspected the staves and they were fine
> • Initial look at cosmics showed no recorded BCO (found by Alessio)
> • Traced to a command not being issued on the MVTX side. Stopping
> the run and issuing this command meant we could see the BCO (since about 1 pm)
> • RCDAQ plugin was updated by Yasser this afternoon
> TPC (Tom Hemmick, Jin, Takao, Evgeny, Charles, David, Thomas, John K.,
> Ross, Bob)
> • Yesterday 08/08 after SCM and overnight:
> • Jaebeom clarified DAQ operator instructions for running when TPC
> is in big partition. He also added scripts that make the procedure much
> easier (fixing scheduler reset).
> • Progress on commissioning fast protection system:
> • Testing relaxed thresholds in the system (300, 400 ADC):
> • Lost 1 stripe in U703 - probably due to mismatch in spark
> protection channel map
> • Also see some reduction of 0.05-0.1 MOhm in 3 sectors -
> might be consistent with dust stuck in holes
> • TPC included in the big partition, which brings the trigger live
> time down from ~95% to ~80% / 45 Hz due to the TPC readout spacing
> protection in non-zero-suppressed mode (3 ms minimal trigger spacing; a
> rough live-time estimate is sketched at the end of this TPC section)
> • Overnight: working HV for central membrane and parking HV for GEMs
> • When the GEMs were below gain, the HV system was very stable
> • New HV monitoring GUI for shift crew
> https://wiki.sphenix.bnl.gov/index.php/Run2023#Operation_Analytics_Site_.28Grafana.29
>
> • Today 08/09
> • Continuing to exercise fast trip protection with slow protection
> (effectively) off
> • Also using diffuse laser as standard candle to estimate gain
> • Gain seems lower than yesterday (higher pressure)
> • Simultaneously searching for laser pulse - no luck yet
> • Gaining statistics from fast protection systems (peak ADC
> distribution)
> • Moved fast protection trip threshold to 500 ADC (less conservative)
> • U607 & U510 lost stripes (0.6 & 0.3 MOhm)
> • New sort of damage: low ADC values in succession
> • U307 lost a stripe (0.7 MOhm)
> • This happened after a trip
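>
> Rough live-time illustration referenced above (assumed numbers, not a
> measurement): with a simple non-paralyzable dead-time model, the 3 ms
> minimal trigger spacing alone puts the live fraction in the same ballpark
> as the ~80% / 45 Hz quoted for the big partition.
>
>     # Illustration only: the input rate is an assumption, not a measured value.
>     def live_fraction(input_rate_hz: float, busy_s: float) -> float:
>         # Non-paralyzable dead time: live = 1 / (1 + rate * busy)
>         return 1.0 / (1.0 + input_rate_hz * busy_s)
>
>     rate_in = 60.0   # Hz, assumed raw trigger rate
>     busy = 3e-3      # s, the 3 ms minimal trigger spacing
>     live = live_fraction(rate_in, busy)
>     print(f"live ~ {live:.2f}, accepted ~ {rate_in * live:.0f} Hz")
>     # -> live ~ 0.85, accepted ~ 51 Hz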
>
> HCal (Silas)
> • Should check the MPOD for i24 to see if it is an MPOD issue or a sector
> issue when we get the chance
> • One failure of the partitioner to boot on one run overnight
> • During the evening run, blocks of hot towers were noticed; however, these
> disappeared after reconfiguring the detector
> EMCal (Sean)
> • SiPM bias voltage tripped in Sector 4 IB1, but would not recover using
> the standard trip recovery procedure - investigating.
> TPOT (Hugo)
> • Problem with data taking last night: endpoint1 was not being flushed
> by RCDAQ, and thus the buffer was getting increasingly full until it
> reached its maximum. Endpoint0 was behaving properly.
> • After talking with Jin: the same thing happened with the TPC following
> an update of the RCDAQ service. Jin traced it to the latest commit by
> Martin and reverted it to fix the issue. The same recipe was applied to
> TPOT and worked. Will need follow-up.
> • Noticed that a few runs taken overnight and this morning were noisier
> than before. Back to normal now. Will continue monitoring.
> • Also had to power cycle all FEEs this morning, between two runs,
> because of several stuck channels.
>
> INTT (Genki)
> • INTT has been measuring cosmic rays since last night.
> • The chiller stopped today and was recovered.
> • Call list: The order to call is 1. Rachid, 2. Itaru, 3. Genki. Please
> call the number listed at the top. If there is no answer, try the next one.
> sEPD (Tristan)
> • Continuing to collect commissioning data for crosstalk analysis
> • A GUI is present on the detector operator computer; it can be ignored
> for now
> • Loose cables found at the box end
> • 3 oddities remain:
> • One which looks very out of time
> • Two with lower gain
> Gas/Cooling ()
> ZDC ()
> Background Counters ()
> Online Monitoring (Chris)
>
> ******end of the minutes
>
> Best,
>
> Maya Shimomura
> ###################
> Associate Professor
> Division of Natural Sciences
> Nara Women's University
>
> BNL Bldg.510C Room2-223
>
> Email: maya AT cc.nara-wu.ac.jp
> Office: 631-344-2778
> Cell: 631-504-2144
> ###################
>