  • From: Martin Purschke <purschke AT bnl.gov>
  • To: sphenix-hcal-l AT lists.bnl.gov
  • Subject: Re: [Sphenix-hcal-l] Beam test data
  • Date: Fri, 4 Mar 2016 15:01:37 -0500

John,

In the setup_jseb.sh script, the run types are defined, and "junk" is
preset. You can change that initial choice at any time.

Why don't you leave rcdaq_runtypechooser.pl open? It's unobtrusive.

The command-line version is "daq_set_runtype". The idea is that you
usually keep taking the same type for many runs, so you shouldn't have
to answer a question at each run start, which would interfere with most
automated acquisitions. Basically: "take the next run the same way as
the last one until I say otherwise".

Does this help? If this scheme really gets in the way, we can look at
alternatives.

Martin

On 03/04/2016 02:50 PM, John Haggerty wrote:
> Martin,
>
> Great, thanks. I think the event libs have not propagated to the sphenix
> world at RCF, is that right? If so, we'll want that there.
>
> Also, when you get a chance, we're going to need to select the run type
> at the start of every run... I've already written junk when I meant to
> write cosmics and cosmics when I meant to write LED.
>
> On 3/4/16 1:30 PM, Martin Purschke wrote:
>> Dear all,
>>
>> let me add a bit of new information to this.
>>
>> As John points out, we are taking data with the sphenixdaq machine,
>> since it is the one with the jSEB card currently in use.
>>
>> The data, however, are right now flowing to a disk on the gateway
>> machine (highbaygw.phy.bnl.gov), seen as /data/data/phnxsa/...
>>
>> We are not planning to take this machine to Fermilab -- instead, we
>> will redirect the data (and copy what's already there) to a machine
>> called hcaldaq (192.168.100.40). We will preserve the mountpoint's
>> name. That machine has 6 TB of prime, fast disk space.
>>
>> The reason that data are not going to their final destination yet is
>> that JohnH might do some development on that hcaldaq machine that
>> requires some reboots.
>>
>> The elog that existed as an ad-hoc installation on sphenixdaq has been
>> moved to hcalgw as well, and is available on port 7815 (the same port
>> as the 1008 Elogs; the phone number there makes it easy to remember).
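>>
>> Assuming hcalgw resolves on the local network, pointing a browser at
>>
>> http://hcalgw:7815
>>
>> should get you there.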
>>
>> I would suggest that we all use the much faster hcaldaq machine for
>> heavy-duty processing. Eventually the data from the DAQ will be local
>> there, and the machine has a lot more memory and 6 CPU cores -- you
>> will have a much better experience. The standard account is phnxsa -
>> as in "standalone" - the same as on the sphenixdaq machine, but other
>> accounts can be made on demand.
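>>
>> From the gateway, something like
>>
>> ssh phnxsa@192.168.100.40
>>
>> should get you onto hcaldaq (assuming it is reachable from there).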
>>
>>
>> Now to the cool new things.
>>
>> The hcaldaq machine has what is now the "sPHENIX" version of the event
>> libraries. There has been a lot of cleanup, and there are a few new
>> features that I hope you will like.
>>
>> To begin with, the "-p" switch of the ddump utility still takes a
>> single packet id as before, but now also accepts id lists and ranges -
>> you can say, for example
>>
>> ddump -p 1001 ...
>> ddump -p 1001,1002 ...
>> ddump -p 1001-1005 ...
>> ddump -p 1001,1002,3001-3005 ...
>>
>> Also, reading from a file is now the default, so while the "-f" switch
>> is still there, you can omit it.
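>>
>> Putting the two together, for example (the file name here is just a
>> placeholder):
>>
>> ddump -p 1001-1005 /data/data/phnxsa/beam-00001234-0000.prdf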
>>
>> The "et" functionality is gone, at least for now - it is simply not
>> compatible with 64bit. (Did I mention that all is 64bit?)
>>
>> Instead we have a new feature that can be used in online monitoring. A
>> new event stream connects to a monitoring port in rcdaq, and gets events
>> from the DAQ opportunistically (since we are not allowed to hold up the
>> data taking, we have to tolerate that we miss events).
>>
>> In ddump and dlist you would say
>>
>> dlist -r sphenixdaq
>>
>> and dlist connects to that monitoring stream and gets the next event.
>> You will see that rcdaq often takes data without actually logging them,
>> and this is a great way to look at a few events, say, to time in
>> something, without writing anything to disk.
>>
>> In a pmonitor project you open that stream with the command
>>
>> rcdaqopen("sphenixdaq");
>>
>> and can then pstart() your monitoring.
>>
>> If you happen to be on the local machine where rcdaq is running, you'd say
>>
>> dlist -r localhost
>>
>> and
>>
>> rcdaqopen("localhost")
>>
>> but "localhost" is the default and in both cases you can just say "dlist
>> -r" and rcdaqopen();
>>
>>
>> Also, you might want to take a look at the rcdaq manual -
>> http://www.phenix.bnl.gov/~purschke/rcdaq/rcdaq_doc.pdf
>>
>>
>> Best,
>> Martin
>>
>>
>> On 3/3/16 12:35, John Haggerty wrote:
>>> This may not be the final word on this, but since we have moved to
>>> taking data with rcdaq, we've moved to some new data directories, too.
>>>
>>> - Taking data is still done on sphenixdaq, which will be difficult for
>>> most people to get to (right now, the only way in is via the gateway
>>> machine Martin set up in the highbay, called highbaygw.phy.bnl.gov; in a
>>> few weeks, that will be a machine at Fermilab, so don't get used to it).
>>>
>>> - The data directory is an automounted NFS volume on highbaygw called
>>>
>>> /data/data/phnxsa
>>>
>>> If the automount falls off, you can cd to the path to bring it back.
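>>>
>>> For example (the cd alone should re-trigger the automounter):
>>>
>>> cd /data/data/phnxsa
>>> ls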
>>>
>>> - Chris has a new place for data at RCF and I made a directory for the
>>> beamtest there:
>>>
>>> /sphenix/data/data01/t1044-2016a
>>>
>>> and I have been copying data from sphenixdaq there periodically. We'll
>>> try to automate that, but I'm probably more excited about the data than
>>> an automaton would be anyway.
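>>>
>>> Something like this in a cron job would do it (the RCF transfer host
>>> here is a placeholder; the paths are the ones above):
>>>
>>> rsync -av /data/data/phnxsa/ rftpexp.rhic.bnl.gov:/sphenix/data/data01/t1044-2016a/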
>>>
>>> There are run-type directories below that called cosmics, led, and data
>>> that I'm not wedded to, but right now, we're writing data to cosmics for
>>> the most part.
>>>
>>
>

--
Martin L. Purschke, Ph.D.        ; purschke AT bnl.gov
                                 ; http://www.phenix.bnl.gov/~purschke
Brookhaven National Laboratory   ; phone: +1-631-344-5244
Physics Department Bldg 510 C    ; fax: +1-631-344-3253
Upton, NY 11973-5000             ; skype: mpurschke
-----------------------------------------------------------------------




