
sphenix-emcal-l - Re: [Sphenix-emcal-l] plan for reading out EMCal channels?

sphenix-emcal-l AT lists.bnl.gov

Subject: sPHENIX EMCal discussion

  • From: Jamie Nagle <jamie.nagle AT colorado.edu>
  • To: "sphenix-emcal-l AT lists.bnl.gov" <sphenix-emcal-l AT lists.bnl.gov>
  • Subject: Re: [Sphenix-emcal-l] plan for reading out EMCal channels?
  • Date: Thu, 20 Dec 2018 10:42:28 -0700

Hello All,

Thanks for many useful responses.

One question, when applying the jet algorithms, is the impact of truncating the small energy from many towers -- i.e. keeping only the upward fluctuations and zero-suppressing the downward ones. It would probably be worthwhile to have a study from the sPHENIX Jet Group on the impact of different zero-suppression levels and alternative approaches. In a central Au+Au event, the average EMCal tower energy is 53 MeV, so it is not so much a question of whether there is a signal or not -- there is always a signal, just often a small one. Of course, minimum bias events will be significantly lower.
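A toy illustration of that truncation bias (the noise spread and threshold below are invented placeholders, not sPHENIX numbers): a threshold cut removes exactly the downward fluctuations, so the per-tower mean of what survives is biased above the true average, while the total recorded energy drops.

```python
import random

random.seed(42)

MEAN_E = 0.053       # average central Au+Au tower energy, GeV (from this thread)
NOISE_SIGMA = 0.020  # assumed tower noise spread, GeV (illustrative only)
THRESHOLD = 0.030    # assumed zero-suppression threshold, GeV (illustrative only)
N_TOWERS = 24576     # EMCal tower count

# Toy towers: a small mean energy plus symmetric up/down fluctuations.
towers = [MEAN_E + random.gauss(0.0, NOISE_SIGMA) for _ in range(N_TOWERS)]
kept = [e for e in towers if e > THRESHOLD]  # zero suppression drops the rest

full_mean = sum(towers) / len(towers)
kept_mean = sum(kept) / len(kept)
print(f"kept {len(kept)}/{N_TOWERS} towers")
print(f"mean tower energy: all = {full_mean*1e3:.1f} MeV, "
      f"after suppression = {kept_mean*1e3:.1f} MeV (biased upward)")
```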

Sincerely,

Jamie


||------------------------------------------------------------------------------------------
|| James L. Nagle   
|| Professor of Physics, University of Colorado Boulder
|| On Sabbatical at CEA (Commissariat à l'énergie atomique) / Saclay
|| EMAIL:   jamie.nagle AT colorado.edu
|| SKYPE:  jamie-nagle        
|| WEB:      http://spot.colorado.edu/~naglej 
||------------------------------------------------------------------------------------------


On Thu, Dec 20, 2018 at 9:22 AM Cheng-Yi Chi <chi AT nevis.columbia.edu> wrote:

Dear All:

    If you want to read out the data through the DCM II, you have to zero-suppress the data. Each DCM II module has 8 optical inputs. There are 4 Stratix II FPGAs to handle the fibers; each FPGA gets 2 fibers, and each fiber has 1.6 Gbps of bandwidth. Each FPGA outputs its data to a 5th one. I will have to dig out the bandwidth limit on that link; if I remember correctly, we assumed 50% of the raw input bandwidth.

    The DCM II backplane runs at only 512 MByte/sec at full speed. The JSEB II max bandwidth is only 6.25 Gbps, which translates to about 625 MByte/sec. I doubt we will get that much bandwidth in the real world.
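The 6.25 Gbps to ~625 MByte/sec conversion is consistent with 8b/10b line encoding (10 link bits per payload byte) -- that encoding assumption is mine, the rates are from this thread. A back-of-the-envelope check:

```python
# Link and backplane numbers quoted in this thread.
JSEB2_LINK_GBPS = 6.25   # JSEB II max link speed, Gbps
BACKPLANE_MBPS = 512.0   # DCM II backplane, MByte/s

# Assuming 8b/10b line encoding: 10 line bits carry one payload byte.
jseb2_mbytes = JSEB2_LINK_GBPS * 1e9 / 10 / 1e6
print(f"JSEB II payload bandwidth ~ {jseb2_mbytes:.0f} MByte/s")

# With these numbers the backplane, not the JSEB II link, is the tighter limit.
bottleneck = min(jseb2_mbytes, BACKPLANE_MBPS)
print(f"effective bottleneck ~ {bottleneck:.0f} MByte/s")
```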

    Even if one does only a light zero suppression, the data volume goes up, because now you have to label the data.
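The labeling overhead can be made concrete with a toy payload model (all byte counts are illustrative placeholders, not the actual DCM II format): once the data are sparse, each surviving channel needs an address/length tag, so a light suppression that keeps most channels can exceed the unsuppressed volume.

```python
N_CHANNELS = 24576    # EMCal channel count
N_SAMPLES = 16        # samples per channel (from this thread)
BYTES_PER_SAMPLE = 2  # assumed ADC word size (illustrative)
TAG_BYTES = 4         # assumed per-channel tag once the data are sparse

unsuppressed = N_CHANNELS * N_SAMPLES * BYTES_PER_SAMPLE

def suppressed(kept_fraction):
    """Event volume when only a fraction of channels survive, each tagged."""
    kept = int(N_CHANNELS * kept_fraction)
    return kept * (N_SAMPLES * BYTES_PER_SAMPLE + TAG_BYTES)

# Above this kept fraction, "zero-suppressed" data is bigger than raw data.
break_even = (N_SAMPLES * BYTES_PER_SAMPLE) / (N_SAMPLES * BYTES_PER_SAMPLE + TAG_BYTES)
print(f"break-even kept fraction ~ {break_even:.2f}")
print(f"keep 95%: {suppressed(0.95)} bytes vs {unsuppressed} bytes unsuppressed")
```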


On 12/20/2018 12:34 AM, Martin Purschke wrote:
Hi Dennis,

the plan is to record 16 samples per channel. Please keep in mind that
this is already a change in plans from 12 samples, and that forced us to
reduce the number of digitizers per XMIT board from 4 to 3 to stay
within the front-end bandwidth limits. Past that stage, whatever
generates more data will reduce the event rate, and eventually we will
run into our data logging limits.

The classic zero suppression scheme is to see if those 16 samples
contain a "signal" - where it remains to be determined how we define
that, exactly - and zero-suppress the others. I would assume that we try
to time the trigger in a way that the peak of the waveform is in the
first half of the samples so we have some baseline before and catch the
tail, as you can see in the attached raw waveform. This is from the 2016
test beam where we kept 24, not 16, samples, but you get the idea.
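The "is there a signal" test left open above could be as simple as a peak-over-baseline cut; a minimal sketch, where the baseline window and the threshold are my placeholders, not a decided scheme:

```python
def has_signal(samples, n_baseline=4, threshold=20):
    """Classic zero-suppression test on one 16-sample waveform.

    Estimate the baseline from the first few pre-trigger samples (the
    trigger is timed so the peak sits in the first half of the window,
    leaving some baseline up front), then ask whether any sample rises
    more than `threshold` ADC counts above that baseline.
    """
    baseline = sum(samples[:n_baseline]) / n_baseline
    return max(samples) - baseline > threshold

# A flat pedestal with noise is suppressed; a pulse is kept.
pedestal = [100, 101, 99, 100, 100, 102, 98, 100, 101, 99, 100, 100, 99, 101, 100, 100]
pulse    = [100, 101, 99, 100, 180, 350, 260, 170, 130, 110, 104, 101, 100, 99, 100, 100]
print(has_signal(pedestal), has_signal(pulse))
```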

Whatever is contained in those 16 samples is what you have - including
noise. Now we are already pushing hard against the limits of what the
RCF told us is doable. Most of it comes from the TPC, but generally,
almost every current estimate of the overall data logging rate is
exceeding this limit, some even by factors of more than 3. This means
that we will not be able to afford a dramatically relaxed threshold and
keep most, if not all, emcal waveforms for many events. We can tweak
that to the breaking point, but more data/event will mean fewer events.

Do we need such a full view for every event? We could generate a slow
trickle of such full events. Say every few seconds we tag an event
special so the DCM2s skip the zero-suppression (that would need to get
implemented), and then you get some events like that without breaking
the bank. (We are thinking of that for special events such as in-beam
pedestal events on empty crossings). Not sure if that, and what minimum
fraction of events, would satisfy such a fluctuation analysis.
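The "slow trickle" idea is essentially a time-spaced prescale: tag roughly one event every few seconds so the DCM2s skip suppression for it. A toy sketch of the bookkeeping only -- in reality this decision would have to live in the DCM2/trigger path, and the interface here is hypothetical:

```python
class UnsuppressedTrickle:
    """Tag roughly one event per `interval_s` seconds as 'skip zero suppression'."""

    def __init__(self, interval_s=2.0):
        self.interval_s = interval_s
        self.last_tag_s = float("-inf")

    def tag(self, event_time_s):
        if event_time_s - self.last_tag_s >= self.interval_s:
            self.last_tag_s = event_time_s
            return True   # read out full, unsuppressed waveforms
        return False      # normal zero-suppressed readout

trickle = UnsuppressedTrickle(interval_s=2.0)
# At a few kHz, only a tiny fraction of events carry the full payload:
tagged = sum(trickle.tag(i / 5000.0) for i in range(50000))  # 10 s at 5 kHz
print(f"{tagged} of 50000 events tagged unsuppressed")
```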

Also, what is your current assumption on how stable the baseline is? 1.5
bits? 2? I guess early on we could take special runs with a few hundred
thousand events in a non-zero-suppressed mode and see whether that is an
issue or not, but we cannot expect huge statistics here.
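Measuring that from such special non-zero-suppressed runs could be as simple as a per-channel pedestal mean and RMS over many events; a sketch with fake data, where the ~1.5-count spread is invented for illustration:

```python
import math
import random

random.seed(1)

def pedestal_stats(waveforms):
    """Mean and RMS of the pedestal over many non-zero-suppressed events."""
    samples = [s for wf in waveforms for s in wf]
    mean = sum(samples) / len(samples)
    rms = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
    return mean, rms

# Fake pedestal-only events for one channel: baseline at 100 ADC counts
# with an invented ~1.5-count spread, 16 samples per event.
events = [[random.gauss(100.0, 1.5) for _ in range(16)] for _ in range(1000)]
mean, rms = pedestal_stats(events)
print(f"pedestal mean = {mean:.2f}, RMS = {rms:.2f} ADC counts")
```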

We can kick this around some more as needed...

	Martin


On 12/19/18 13:55, Perepelitsa, Dennis wrote:
Hi EMCal group,

There was some discussion at the recent Collaboration Meeting regarding whether we would read out every EMCal channel every event, or if there will be some zero / noise suppression. For the heavy ion jet reconstruction, it may be important to have every channel to get both positive and negative noise fluctuations.

What is the current thinking or plan for both p+p and Au+Au running? What do the currently estimated data throughput and volume numbers assume in terms of information from the EMCal?

Dennis

Dennis V. Perepelitsa
Assistant Professor, Physics Department
University of Colorado Boulder




_______________________________________________
sPHENIX-EMCal-l mailing list
sPHENIX-EMCal-l AT lists.bnl.gov
https://lists.bnl.gov/mailman/listinfo/sphenix-emcal-l






