
e-rhic-ir-l - Re: [E-rhic-ir-l] Discussion of parameter choices

  • From: "Palmer, Robert" <palmer AT bnl.gov>
  • To: "Aschenauer, Elke" <elke AT bnl.gov>
  • Cc: "E-rhic-ir-l AT lists.bnl.gov" <E-rhic-ir-l AT lists.bnl.gov>
  • Subject: Re: [E-rhic-ir-l] Discussion of parameter choices
  • Date: Thu, 9 Mar 2017 19:43:24 +0000

Great. You are indeed superfast and I am happy that we are pretty much on the same page. Some points you bring up:

 

1. Re inverse cross sections:

I see the use of inverse cross sections as reflections of what we can do in some as-yet-undefined time. But with twice the luminosity we get twice the data in the same time. Thus luminosity times efficiency is the right comparative measure of what we can expect to do.

 

2. Re lowering dp/p:

It would be nice to perform the Fourier transform with efficiencies based on Richard's results, with or without the assumption of lower momentum spread (1/2), raising the R3/R4 efficiencies by 2.4 and reducing luminosity to 0.66. Of course, interpreting the resultant structure-shape errors will not be unambiguous; it depends on which aspect we are most interested in.

 

3. Re divergence effects:

Would you guys be able to redo a Fourier transformation with added pt errors from larger divergences? The errors will, of course, be much worse at low pt than at high pt, and could be better corrected for with greater statistics, but a first stab would be to just insert them, do the transform, and see what effect they have. This might be instructive. Again, one would want to do it for at least two cases: a) randomly scatter the pts corresponding to an average divergence A, with the statistics of case A; and b) randomly scatter them assuming a larger average divergence B, using the better statistics corresponding to luminosity B, where the luminosities (inverse cross sections) are proportional to divergence^2.

                                                                                                      

         Ave divergence    Inverse cross section

    A      100 murad             10 fb^-1

    B      200 murad             40 fb^-1

    C      316 murad            100 fb^-1

 

I am deliberately changing divergences in both x and y, since the physics is independent of x and y. I am suggesting you do this with fixed efficiencies, say 10% below 0.4 GeV/c and 50% above 0.4 GeV/c, to keep the two effects separate.
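The proposed smearing study could be sketched along these lines. All numbers here (beam momentum, t-slope, sample sizes) are illustrative assumptions, not eRHIC parameters; the point is only the mechanics: draw |t| from an exponential, convert to pt, smear by beam divergence, and scale the statistics with divergence^2.

```python
import math
import random

random.seed(1)

# Illustrative assumptions, not eRHIC parameters:
P_BEAM = 250.0   # GeV/c, assumed proton beam momentum
B_SLOPE = 5.0    # GeV^-2, assumed exponential t-slope

def smeared_pt_sample(n_events, divergence_rad):
    """Draw |t| from exp(-b|t|), convert to pt, smear by beam divergence."""
    sigma_pt = P_BEAM * divergence_rad   # GeV/c, divergence-induced pt error
    sample = []
    for _ in range(n_events):
        t_abs = random.expovariate(B_SLOPE)       # |t| in GeV^2
        pt_true = math.sqrt(t_abs)                # pt = sqrt(|t|)
        pt_meas = pt_true + random.gauss(0.0, sigma_pt)
        sample.append(pt_meas)
    return sample

# Case A: 100 murad divergence, statistics ~ 10 fb^-1.
# Case B: 200 murad divergence, statistics ~ 40 fb^-1
# (lumi ~ divergence^2), i.e. 4x the events but 2x the smearing.
case_a = smeared_pt_sample(10_000, 100e-6)
case_b = smeared_pt_sample(40_000, 200e-6)
print(len(case_a), len(case_b))
```

The fixed step efficiencies (10% below 0.4 GeV/c, 50% above) could then be applied as per-event weights before the transform, keeping the divergence effect separate from the acceptance effect.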

 

Bob

From: Aschenauer Elke-Caroline [mailto:elke AT bnl.gov]
Sent: Thursday, March 09, 2017 1:38 PM
To: Palmer, Robert <palmer AT bnl.gov>
Cc: Aschenauer, Elke <elke AT bnl.gov>; Petti, Richard <rpetti AT bnl.gov>; E-rhic-ir-l AT lists.bnl.gov; Blaskiewicz, Michael M <blaskiewicz AT bnl.gov>
Subject: Re: [E-rhic-ir-l] Discussion of parameter choices

 

On Mar 9, 2017, at 11:38, Palmer, Robert <palmer AT bnl.gov> wrote:

 

Dear Bob,

 



All

 

Referring to the slides:

 

https://dl.dropboxusercontent.com/u/71472420/RP_study_with_dispersion_2.pptx

 

It is clear from the later slides that:

1. From slide 16, left top: in the assumed input, the errors are worse at high pt because the cross sections fall with pt.

 

we agree

2. From slide 16, right middle: we need some data at pt down to 0.18 GeV/c. Having no data below 0.44 GeV/c is a disaster.

 

again we agree 



But

3. Slide 19 shows that the loss of low-pt statistics by a factor of 10 is not a disaster.

 

yes, not for the Fourier transform, but for acceptance corrections and so on you get large systematics, as you measure only a small part of the total phase space. Also, one thing we did not study is whether the effect is the same if the functional shape is not exponential but dipole-like.



4. Slide 17 shows that the loss everywhere by a factor of 10 is much worse.

 

yes, this is true.



5. I conclude, even without the case of a loss of just the high pt by a factor of 10, that it is worse to lose a factor of 10 at high pt than the same loss at low pt.

 

yes, this is most likely true.



 

This tells me that even if we do some running with High Acceptance (but lower luminosity) parameters, to better measure low-pt tracks, we may still also want to run with higher luminosity to give better data at higher pt.

 

yes, to have good accuracy at high pt you need more luminosity to compensate for the fall-off of the cross section.



 

But now look at Richard's earlier results. His slide 9, bottom left, is disappointing, but remember that

a) he does not include the forward calorimeter that will raise the efficiency at the high pt end to near 100%;

 

yes, the spectrometer is not included, because with the bending right now it would not really work. Also, we need to really work out what such a spectrometer would be, what technology, and so on. This needs some thinking.



and

b) he is using parameters with three times the luminosity assumed in the later slides and my points 1-5; so

 

Bob, we did not assume a peak luminosity for the studies; we use total integrated lumi, which, depending on what the peak/average lumi of the machine is, takes shorter or longer to accumulate. This is the 10 fb^-1 vs. 1 fb^-1, nothing else.



c) it shows efficiencies below 0.4 GeV/c of only 7-8%, but this corresponds to about 20% of luminosity times efficiency, while that at higher pt is around 50% x 3 = 150%. With the forward spectrometer, that rises nearer to 300%. These are both well above those used in the slide 19 example, and not obviously a disaster.

 

Bob, I cannot follow this. In a study we would simulate an integrated lumi of 1 fb^-1 and would weight this with the acceptance.

So higher lumi means either better uncertainties or less needed running time.

Also, please keep in mind there is a part of the imaging program (the one needing polarisation) which requires 100 fb^-1 integrated lumi.



 

Can this be improved?

 

Richard is doing an analysis with half the momentum spread. I can estimate what he will conclude by assuming the case where dispersion times momentum spread dominates over the betatron size. The efficiency then is set just by the area under the xl distribution (outgoing proton momenta as fractions of their incoming momenta) with values less than (1.0 - Disp x dp/p). Using the distribution from Elke for 20 x 250 GeV, I get

 

    dp/p        Efficiency    Luminosity    Product    Relative

    6.5e-4          7%           2.89        0.202       1.0

    3.25e-4        17%           1.91        0.324       1.6

    1.62e-4        34%           1.05        0.357       1.76
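The arithmetic behind the table is just Product = efficiency x luminosity, with the final column giving each product relative to the first row; a few lines make that explicit (the label "Relative" for the last column is my reading of the 1.0 / 1.6 / 1.76 entries):

```python
# Rows of the table: (dp/p, efficiency, luminosity).
rows = [
    (6.5e-4,  0.07, 2.89),
    (3.25e-4, 0.17, 1.91),
    (1.62e-4, 0.34, 1.05),
]

# Product = efficiency x luminosity; Relative = ratio to the first row.
products = [eff * lum for _, eff, lum in rows]
relative = [p / products[0] for p in products]

for (dpp, eff, lum), p, r in zip(rows, products, relative):
    print(f"dp/p={dpp:.2e}  eff={eff:.0%}  lum={lum:.2f}  "
          f"product={p:.3f}  rel={r:.2f}")
```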

 

 

Yes, as usual Rich was super fast, and attached is the plot with 1/2 of the beam spread.

 



I am taking the 7% here to be a confirmation of Richard's 7-8%. If so, with half the momentum spread (presumably with twice the bunch length), I expect him to get around 17%. If I use Mike B's code for hourglass and crab effects with twice the bunch length, I get Lum = 66%. We appear to win by a factor of 1.6, giving Lum x eff = 34% below 0.4 GeV/c and something approaching 200% at the high-energy end using the forward spectrometer. These are only rough estimates, but they are encouraging.
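As a rough stand-in for that estimate (Mike B's code also includes crab-crossing effects, which this sketch ignores): for round Gaussian beams colliding head-on, the hourglass luminosity reduction factor is R(u) = sqrt(pi) * u * exp(u^2) * erfc(u) with u = beta*/sigma_z, so doubling the bunch length halves u. The beta* and bunch-length values below are illustrative assumptions, not the eRHIC parameters.

```python
import math

def hourglass_factor(beta_star, sigma_z):
    """Hourglass luminosity reduction for round Gaussian beams, head-on:
    R(u) = sqrt(pi) * u * exp(u^2) * erfc(u), u = beta*/sigma_z."""
    u = beta_star / sigma_z
    return math.sqrt(math.pi) * u * math.exp(u * u) * math.erfc(u)

beta_star = 0.05   # m, assumed
sigma_z = 0.07     # m, assumed bunch length

r1 = hourglass_factor(beta_star, sigma_z)
r2 = hourglass_factor(beta_star, 2 * sigma_z)   # doubled bunch length
print(f"R(sigma_z)={r1:.3f}  R(2*sigma_z)={r2:.3f}  ratio={r2/r1:.2f}")
```

R approaches 1 when the bunch is short compared to beta* and falls off as the bunch lengthens, which is why halving dp/p (at the cost of a longer bunch) trades efficiency against luminosity.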

The alternative approach would be to go back to 'High Acceptance' parameters. This would give around 50% at low pt but only 100% at high pt. This is not obviously an improvement: the loss of high-pt data could beat the gain at low pt. More study is needed. Some ideal mix of High Acceptance and High Luminosity might give the best overall performance. At least for the moment, the conclusion is that we have more than one approach; we have the tools; and we will continue the studies.

 

yes, we have indeed developed a lot of tools which allow us to quickly study impacts.

 

Here is another one from Rich: the increase of the magnet acceptance by 10%. It helps quite a bit at high pt.

 



 

There is a quite separate question about the relative advantages of higher luminosity using higher divergences in the y direction. The gains in luminosity will have to be balanced against the increased errors in pt measurements, with or without transverse-momentum dynamic fitting. Again, we have the tools and will continue the studies.

 

This one I need to  think more about, and indeed we don't need all answers by the 2nd of April.

 

Cheers 

 

Rich and Elke



 

Bob

 

 

--
Elke-Caroline Aschenauer
Brookhaven National Lab, Physics Dept.
Bldg. 510 / 2-195, 20 Pennsylvania Avenue, Upton, NY 11973
Tel.: 001-631-344-4769   Cell: 001-757-256-5224
25 Corona Road, Rocky Point, NY 11778   Tel.: 001-631-569-4290
Mail: elke AT bnl.gov   elke.caroline AT me.com

 

 

 



