Hi Martin and John,
We had talked about making the switch to this configuration earlier, but decided against it at the time. Our rationale was that if we lost power or communication to a rack, we could potentially lose the whole detector rather than half of it:
say 2e2 was on slot 0 and 2e2 goes down, that knocks out both the outer and inner HCal completely, whereas in the current configuration, loss of communication to one rack would only take out either the east or the west side. However, we certainly could revisit this,
or we could process the files after the fact to separate them into outer and inner and then merge them into two separate inner and outer files.
Cheers,
Silas Grossberndt
(they/them)
Graduate Center and Baruch College, CUNY
sPHENIX, BNL
John,
This is a good idea. We went rack-centric as power became available, and
I at least never thought beyond that setup.
Now that we have both racks on all the time, I think Stefan and the gang
should consider the fiber re-arrangement.
Best,
Martin
On 3/29/23 23:39, John Haggerty wrote:
> I think this might have been discussed, but I have to ask... are you
> sure you want the OHCAL separated into east and west? If the OHCAL (and
> IHCAL) are in one DCM, they will be in one file, which seems nice for
> displays, clustering, and whatever. It may not be that way forever
> depending on what we find with speed limits, but it seems like it would
> make it easier at the beginning. Just a thought.
>
> On 2023-03-29 18:13, Martin Purschke via sPHENIX-HCal-l wrote:
>> Dear HCal aficionados,
>>
>> I tried my hand at running the whole HCal with the full run control
>> and all. After a few issues, I got it.
>>
>> First off, the filerules for west (seb16) and east (seb17) were the
>> same. I changed this to include "East" or "West" in the file names:
>>
>>> -- defined Run Types:
>>> cosmics -
>>> /bbox/commissioning/HCal/cosmics/cosmics_West-%08d-%04d.prdf
>>> junk -
>>> /bbox/commissioning/HCal/junk/junk_West-%08d-%04d.prdf
>>> led - /bbox/commissioning/HCal/led/led_West-%08d-%04d.prdf
>>> led_tower -
>>> /bbox/commissioning/HCal/led_tower_by_tower_hcal/led_tower_West-%08d-%04d.prdf
>>>
>>> pedestal -
>>> /bbox/commissioning/HCal/pedestal/pedestal_EAST-%08d-%04d.prdf
>>> pulser -
>>> /bbox/commissioning/HCal/pulser/pulser_West-%08d-%04d.prdf
>>
>> so the 2 RCDAQs can log at the same time. I also changed the logfile to
>>
>>> H=$(hostname)
>> ...
>>> LOGFILE=$HOME/rcdaq_${H}.log
>>
>> so they don't step on each other.
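
[Editor's note: the `%08d-%04d` placeholders in the file rules above are printf-style fields for the zero-padded run and segment numbers. A minimal sketch, using the junk rule quoted above; the substitution values are illustrative:]

```python
# Printf-style file rule from the RCDAQ configuration quoted above:
# %08d = run number, %04d = file segment, both zero-padded.
pattern = "/bbox/commissioning/HCal/junk/junk_West-%08d-%04d.prdf"

# Run 4395, first segment -> matches the file listed later in this mail.
filename = pattern % (4395, 0)
print(filename)
# -> /bbox/commissioning/HCal/junk/junk_West-00004395-0000.prdf
```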
>>
>> Since run control needs to be able to reach the participating servers
>> and the GTM, when using gtm02 one needs to run RunControl on daq01.
>> (We talked about moving yours to the DAQ network soon; I didn't get
>> around to it today.)
>>
>> I talked about this before - the way it works is that RC goes to all
>> participating servers and starts a run with the same run number, sees
>> that they are all done and primed, and then issues the equivalent of a
>> gtm_startrun. On endrun, gtm_stop comes first, and then RC ends the
>> runs on all servers.
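
[Editor's note: the start/stop ordering described above can be sketched as follows. The helper names and the server list are illustrative placeholders, not the real RunControl/rcdaq API; the point is the sequencing: all servers primed before the GTM start, and the GTM stop before the servers end their runs.]

```python
# Sketch of the RunControl sequencing described above (hypothetical helpers).
log = []

def begin_run_on(server, runnumber):
    log.append(f"{server}: begin run {runnumber}")

def end_run_on(server):
    log.append(f"{server}: end run")

def gtm(command):
    log.append(f"GTM: {command}")

SERVERS = ["seb16", "seb17"]  # west and east RCDAQ hosts in this setup

def start_run(runnumber):
    # Same run number on every participating server first...
    for s in SERVERS:
        begin_run_on(s, runnumber)
    # ...and only once all are primed, the GTM start is issued.
    gtm("startrun")

def end_run():
    # On endrun the GTM stop comes first...
    gtm("stop")
    # ...then the runs are ended on all servers.
    for s in SERVERS:
        end_run_on(s)

start_run(4395)
end_run()
```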
>>
>>> [phnxrc@seb16 HCal]$ ls -l /bbox/commissioning/HCal/junk/*4395*
>>> -rwxr--r-- 1 phnxrc sphenix3 2362998784 Mar 29 17:33
>>> /bbox/commissioning/HCal/junk/junk_East-00004395-0000.prdf
>>> -rwxr--r-- 1 phnxrc sphenix3 2068283392 Mar 29 17:33
>>> /bbox/commissioning/HCal/junk/junk_West-00004395-0000.prdf
>>
>> Ok, nothing is perfect; they are off by one event in the count -
>>
>>> [phnxrc@seb16 HCal]$ ddump -p 10 -i -t 12
>>> /bbox/commissioning/HCal/junk/junk_East-00004395-0000.prdf
>>> -- Event 12344 Run: 4395 length: 16 frames: 1 type: 12 (End
>>> Run Event) 1680125638
>>> [phnxrc@seb16 HCal]$ ddump -p 10 -i -t 12
>>> /bbox/commissioning/HCal/junk/junk_West-00004395-0000.prdf
>>> -- Event 12345 Run: 4395 length: 16 frames: 1 type: 12 (End
>>> Run Event) 1680125638
>>
>> but still a hopeful start.
>>
>> Attached is a screenshot of the RunControl GUI at work, together with
>> the 2 servers' rcdaq_status GUIs, for feel-good value (they are just
>> to look at; they are completely passive).
>>
>> Also, as we discussed, the event numbers are not synchronized; each
>> GUI updates on its own interval, so in this view they are all a bit
>> different.
>>
>> Best,
>> Martin
>>
>> _______________________________________________
>> sPHENIX-HCal-l mailing list
>> sPHENIX-HCal-l AT lists.bnl.gov
>>
>> https://lists.bnl.gov/mailman/listinfo/sphenix-hcal-l
>
> ---
> John Haggerty
> haggerty AT bnl.gov
> cell: 631 741 3358
--
Martin L. Purschke, Ph.D.        ; purschke AT bnl.gov
                                 ; http://www.phenix.bnl.gov/~purschke
Brookhaven National Laboratory   ; phone: +1-631-344-5244
Physics Department Bldg 510 C    ; fax:   +1-631-344-3253
Upton, NY 11973-5000             ; skype: mpurschke
-----------------------------------------------------------------------