sphenix-electronics-l AT lists.bnl.gov
Subject: sPHENIX discussion of electronics
[Sphenix-electronics-l] [Wed 2:30 PM] meeting to address Review Reports for T-1044 2016 beam test paper
- From: "Huang, Jin" <jhuang AT bnl.gov>
- To: "sphenix-emcal-l AT lists.bnl.gov" <sphenix-emcal-l AT lists.bnl.gov>, "sphenix-electronics-l AT lists.bnl.gov" <sphenix-electronics-l AT lists.bnl.gov>
- Subject: [Sphenix-electronics-l] [Wed 2:30 PM] meeting to address Review Reports for T-1044 2016 beam test paper
- Date: Tue, 15 Aug 2017 16:58:43 +0000
Following the first round of review comments we received on the 2016 test beam paper, substantial progress has been made in addressing them. Meanwhile, a few points require discussion with the calorimeter groups. We would therefore like to call a meeting with both the EMCal and HCal groups to determine our strategy for addressing them.
We would like to use the alternative slot of the bi-weekly HCal meeting: Wed, Aug 16, 2:30-3:30 PM @ BNL 2-219.
· The agenda page is: https://indico.bnl.gov/conferenceDisplay.py?confId=3466
· And phone bridge: https://bluejeans.com/530634249
Looking forward to talking to you tomorrow,
Cheers,
The 2016 T-1044 paper committee
______________________________
Jin HUANG
Associate Physicist
Brookhaven National Laboratory
Physics Department, Bldg 510 C
Upton, NY 11973-5000
Office: 631-344-5898 Cell: 757-604-9946
______________________________
From: Huang, Jin
Dear Collaborators
Please find below the first review report for the T-1044 2016 beam test paper, which was submitted to IEEE Transactions on Nuclear Science and posted as arXiv:1704.01461 in April 2017.
The general tone from the three reviewers and the editors is positive regarding the significance of the paper, but revisions on many specific points are required, and we are encouraged to resubmit a revised manuscript within five weeks from today.
We will work with the calorimeter groups to respond to these comments and update the manuscript:
· The response to the comments will be edited on Google Docs: https://docs.google.com/document/d/1CVHhNSWhvIUj4Ce825IWWACsI5ySOzrryr5YMqT7ga0/edit?usp=sharing (live document)
· The manuscript update on Overleaf: https://www.overleaf.com/5093111kyqrrb (live document, same link as before)
· More information and overall status on the wiki: https://wiki.bnl.gov/sPHENIX/index.php/T-1044_publication#Paper_submission
We would like to thank again everyone who contributed to the manuscript and the beam test, and we would appreciate continued help in completing this revision.
Best regards,
The 2016 T-1044 paper committee
______________________________
Jin HUANG
Associate Physicist
Brookhaven National Laboratory
Physics Department, Bldg 510 C
Upton, NY 11973-5000
Office: 631-344-5898 Cell: 757-604-9946
______________________________
-----Original Message-----
Dear Dr. Huang:
The reviews of your manuscript,
Design and Beam Test Results for the sPHENIX Electromagnetic and Hadronic Calorimeter Prototypes (No. TNS-00153-2017),
have been received and are attached below. The reviewers and editors felt that a number of major issues needed to be addressed in your paper before it would be acceptable for publication. Thus we will not be able to consider it further for publication in its current form.
However, because we believe the work has merit, we invite you to consider submitting a revised manuscript that takes all the reviewers' and editors' comments into consideration. It would be given a new manuscript ID and be fully reviewed again.
If you choose to resubmit, please send the reworked manuscript no later than 5 weeks from today, but preferably as soon as possible. You can use the following link to start your resubmission without logging in:
*** PLEASE NOTE: This is a two-step process. After clicking on the link, you will be directed to a webpage to confirm. ***
https://mc.manuscriptcentral.com/tns-ieee?URL_MASK=bba5f145c7ed4308ad2dc97f6cdcaa58
Or, you can resubmit by accessing your Author Dashboard, and clicking 'Manuscripts Awaiting Resubmission' along the left hand side. Once there, click 'create a resubmission' in the Action column next to the manuscript ID of your previous submission. This will start the submission process for your resubmission.
**Please note: your manuscript will receive a new manuscript ID, but using the resubmission option will automatically link your resubmission to the previous version; please do not create an entirely new submission.
With your resubmission, you must provide a point-by-point response in 'Step 1: View and Respond to Decision Letter' detailing the changes you have made to address the reviewers' concerns and the editors' comments. On your revised manuscript, please highlight all of the changes you have made in response to those comments. Failure to provide a clear list of the changes and a copy of the manuscript with the changes highlighted may result in the rejection of your manuscript. If you disagree with any of the reviewers' recommendations, please include a detailed discussion of why you disagree in the response box.
Most of the previously entered information from your prior submission will carry over; however, please fill in any missing information in the required fields.
In 'Step 6: File Upload', you will upload your new manuscript file and click 'Save & Continue', before you Review & Submit in Step 7.
If you have any administrative inquiries or questions regarding ScholarOne Manuscripts please contact tns-editor AT ieee.org for assistance.
I look forward to hearing from you soon.
Sincerely,
Maria Grazia Pia
Reviewers' Comments
Reviewer: 1
Comments and suggestions for the author
Although the article seems to be well written, clear and correct, and I think it is interesting and appropriate for the journal, there are some limitations from the statistical point of view.
There are no evident mistakes in the adopted statistical techniques or in the interpretation of the statistical results, but the proposed methods do not do full justice to the scientific contribution of the work.
The main point is the lack of use of inferential methods. All the graphs presented and commented on are good statistical tools that provide useful preliminary information about the problem. These results have only a descriptive value, but the conclusions cannot be generalized and are therefore not as interesting for the scientific community as they might be.
The graphs merely help to characterize and "read" the analyzed data, but the adoption of suitable inferential techniques and in particular of methods for hypothesis testing are necessary.
For instance, on page 13, when you comment on the linearity of the plot, you do not mention any statistical test. The plots represent parabolas, hence non-linear equations. To conclude in favour of linearity, you should test this hypothesis (the curve is a straight line) against the hypothesis of non-linearity (the curve is a parabola), or alternatively test the significance of the coefficient of the second-order term in the regression model.
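[Editorial sketch of the test the reviewer suggests; the data, energies and function names below are hypothetical, not taken from the paper: fit a second-order polynomial and test whether the quadratic coefficient is compatible with zero.]

```python
import numpy as np
from scipy import stats

def quadratic_term_test(x, y):
    """OLS fit of y = b0 + b1*x + b2*x^2; two-sided t-test of H0: b2 = 0."""
    X = np.column_stack([np.ones_like(x), x, x ** 2])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    n, p = X.shape
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)              # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)         # covariance of the estimates
    t_stat = beta[2] / np.sqrt(cov[2, 2])     # t statistic for the quadratic term
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - p)
    return beta, p_value

# Hypothetical linearity check: detector response vs. beam energy,
# with a response that is linear by construction
rng = np.random.default_rng(0)
x = np.linspace(2.0, 28.0, 20)                # illustrative beam energies (GeV)
y = 0.98 * x + rng.normal(0.0, 0.2, x.size)
beta, p_value = quadratic_term_test(x, y)

alpha = 0.05                                  # chosen significance level
print(f"b2 = {beta[2]:.4f}, p-value = {p_value:.3f}")
print("compatible with linearity" if p_value > alpha else "non-linearity detected")
```

Failing to reject H0: b2 = 0 at the chosen level supports the linearity claim; a small p-value indicates a significant curvature.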
Another example is related to Fig.27 (page 17). You speak of "good agreement", "excellent agreement", etc. To make it a meaningful scientific result, you must apply suitable statistical tests of goodness of fit (e.g. Kolmogorov-Smirnov, Anderson-Darling, chi-square, etc.), choose a significance level and apply a consistent decision rule (e.g. by comparing p-value and significance level).
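[Editorial sketch of the decision rule the reviewer describes, on hypothetical data not from the paper: draw a sample, fit a normal model, and replace "good agreement" with a Kolmogorov-Smirnov p-value compared against a preset significance level. Caveat: when the model parameters are fitted from the same sample, the plain KS p-value is only approximate; a Monte Carlo calibration (e.g. scipy.stats.goodness_of_fit in recent SciPy versions) is more rigorous.]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Stand-in for a measured energy distribution: 500 Gaussian-like values
sample = rng.normal(loc=8.0, scale=0.12 * 8.0, size=500)

# Fit a normal model, then quantify the agreement with a KS test
mu, sigma = np.mean(sample), np.std(sample, ddof=1)
ks = stats.kstest(sample, stats.norm(loc=mu, scale=sigma).cdf)

alpha = 0.05  # chosen significance level
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")
print("model compatible" if ks.pvalue > alpha else "model rejected")
```

The same pattern applies with Anderson-Darling or chi-square statistics; the essential point is the explicit significance level and the p-value-based decision.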
Similar considerations about the need for statistical tests, apply to Fig.22, Fig.24, Fig.28, Fig.29.
Page 13, right column, 11 lines from the bottom: "...which significantly reduces...". If you speak of statistical significance, you must mention the p-value and the significance level or other elements needed to draw inferential conclusions. Otherwise, replace the adverb with "clearly" or a similar non-statistical term.
Some useful references about statistical tests are:
- Bonnini S, Corain L, Marozzi M, Salmaso L (2014). Nonparametric Hypothesis Testing: Rank and Permutation Methods with Applications in R. Wiley: Chichester.
- Conover WJ (1999). Practical Nonparametric Statistics. Wiley.
- Hollander M, Wolfe DA (1999). Nonparametric Statistical Methods. Wiley.
Reviewer: 2
Comments and suggestions for the author
The paper 'Design and Beam Test Results for the sPHENIX Electromagnetic and Hadronic Calorimeter Prototypes' by C.A. Aidala et al. presents the design, construction and test beam studies of the electromagnetic and hadronic calorimeter prototypes intended for the sPHENIX experiment. The paper is very well written and provides a complete and thoughtful description of all phases of the project. The physics requirements of the experiment for the performance of the calorimeters are not very demanding, hence the practical aspects imposed by the geometrical constraints and affordability were probably the primary drivers of the design. A relatively unique feature of the presented calorimeters is the very large variation of the sampling fraction along the depth of the calorimeter, especially in the hadronic part. The paper contributes significantly to the existing body of knowledge about calorimeters and well deserves publication. The paper is excellent as it is, but there are several improvements which ought to be considered to make it even better (in chronological order):
[Apologies: page numbers below refer to the pdf file and are offset by 1 with respect to the paper itself]
1. P3, left, line 51: the fiber diameter ought to be mentioned.
2. P4, right, line 16 has a typo: cneter should read center.
3. P5, right, line 59: 'outer radius of a rectangular tile' is better referred to as 'edge'.
4. P8, left, line ~33: it would be a good idea to specify the overvoltage at which the MPPCs are operated.
5. P10, right, line 23, P12, left, line 55, and many references in the text and figures thereafter: the beam momentum bite is assumed to be 2% using the beam simulation (ref 21). The lead glass calorimeter (P12) was used to measure the observed width of the distribution and decompose it into the stochastic and constant terms.
As far as I understand the flow of the argument, the dp/p of 2% was assumed to infer a systematic error of the lead glass of 1.4%. This probably places too high a weight on the FTBF simulation, and it may well be (and is more probable) that the beam momentum spread dp/p is 2.5% rather than 2%. The beam dp/p is used consistently throughout the paper to infer the constant term of the presented calorimeters, and the use of too small a dp/p results in an apparent degradation of the resolution of the calorimeters. For the vast majority of the presented results the beam spread is completely irrelevant, except for the EM resolution. Using a more realistic dp/p would reduce the constant term at 10 degrees to about 0.3%. Perhaps there are good reasons to accept the simulation value of dp/p = 2%, and in such a case they should be mentioned to justify the treatment of dp/p.
6. P11, left, line 8: perhaps 'placed in the beam and rotated' would be a better expression, to avoid possible confusion about 'beam rotated'.
7. P11, left, line 47: some words ('have') seem to be missing.
8. P13, right, last two bullets: increasing the area of accepted beam impact points degrades the resolution because of the spatial non-uniformities of the detector and the imperfect experimental correction. But the light yield and the resulting stochastic term in the resolution should be pretty much the same. The procedure of separating the stochastic term from the constant term is imperfect and is probably responsible for the values presented. I think the authors should consider using the same stochastic term as determined in the small area of the detector and express the degradation of the resolution as an increase of the constant term only.
9. P15, left, line 48: 'hadron rejection' is presented and discussed without its definition. The actual procedure is discussed later, in the right column. Perhaps one can re-order this section slightly by describing the method first.
10. P17, right, lines 40-41.
The non-linearity of the electron response is attributed to the non-linearity of e/h. This is a tautology, given the observation that the response to hadrons is linear. Given the specific features of the presented calorimeters (for example, but not limited to, the variation of the sampling fraction with depth), the linearity of both the electromagnetic and the hadronic calorimeter is an open question. It is observed that the hadronic calorimeter is linear and the electromagnetic one is not (very often it is the other way around). This implies that e/h is non-linear too, but it is an empirical statement and not a given fact.
11. P18, right, lines 54-56: relative calibration of the EM and hadronic sections is always a tricky part in the case of inhomogeneous calorimeters, and it should be described in more detail.
12. P18, right, line 57 states that it is expected that the population of HCAL and HCALOUT should increase with energy. Perhaps the basis of this expectation should be mentioned. The relative population of the different samples is given by the relative probability of a hadron traversing the EM or EM+inner HCAL without inelastic interaction. Hadron-nucleus inelastic cross sections do not vary much with energy, hence one might naively expect the relative populations to be the same at these energies.
13. P18, bottom: there is no mention of how the energy of the events in the different categories is calculated. Presumably in all cases the observed energy in the three compartments is summed, regardless of the event categorization. It would be good to clarify that.
14. P19, lines 37-42: there are three samples of events, and the linearity of the energy measurement can be determined for each sample. The text refers to the 'normalization of the FULL sample to the input energy' and the resulting impact on the linearity for the two other samples. This should be elaborated; it is difficult to understand what is being done and how it affects the linearity of the response.
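[Editorial sketch of the quadrature unfolding behind the reviewer's beam-spread point; the measured constant term of 2.52% below is hypothetical, chosen only to illustrate the sensitivity, and is not a value quoted from the paper.]

```python
import math

def intrinsic_constant_term(b_measured, beam_spread):
    """Unfold the beam momentum spread from the measured constant term
    in quadrature: b_det = sqrt(b_meas^2 - (dp/p)^2)."""
    diff = b_measured ** 2 - beam_spread ** 2
    return math.sqrt(diff) if diff > 0.0 else 0.0

b_meas = 0.0252  # hypothetical measured constant term (2.52%)
for dpp in (0.020, 0.025):
    b_det = intrinsic_constant_term(b_meas, dpp)
    print(f"dp/p = {dpp:.1%}  ->  intrinsic constant term = {b_det:.2%}")
```

With these illustrative numbers, assuming dp/p = 2.0% leaves an intrinsic constant term of about 1.5%, while dp/p = 2.5% leaves only about 0.3%, which is why the assumed beam spread matters so much for the quoted EM resolution.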
Reviewer: 3
Comments and suggestions for the author
Dear Colleagues,
thank you very much for your nice paper draft. It looks very complete to me. There are a few comments below which could improve the paper a bit more.
The introduction could benefit from more information on the general setup and physics case at RHIC: which beam particles, which beam energy. On page 2/19, line ~50, right column, the physics requirements are mentioned, but nothing is said about this. Sure, it is a detector paper, but a few enlightening sentences would be helpful, also to understand the detector properties. For example, on page 2/19, line ~19, right column, it is said that the "calorimeter will be used for identifying photons and electrons". Well, usually one also measures positrons with an electromagnetic calorimeter; why is this not important? In line ~26 the "enabling of full jet reconstruction at mid-rapidity" is mentioned. If one does not know what the energies are, one feels some information is missing on what the requirement on the detector is.
Missing word? Line 50, left column: …with (+a) tracking system…
Page 7/19, line 43, right column: I like very much that common electronics designs and commercial components have been chosen for the electromagnetic and hadronic parts. The system will be much easier to maintain with this choice.
Page 8/19, line 33, left column: … MultiPixel Photon Counter(-s) has been selected …
Line 59: I like the organization of the DSC, again very efficient for future maintenance.
Page 9/19, several lines: please change slow controls into slow control(-s).
E: SiPM temperature compensation: why was it not considered to keep the SiPMs at a constant temperature, for example with Peltier elements?
Page 11/19, line 47, left column: …. Is required to (+have) only one valid ….
Page 13/19, line 19, figure caption, … in columns and rows (-using) (+used) for energy calibration…
Page 16/19, line 36, figure caption … dash(+ed) lines… Line 6, right column, … is performed with 4 GeV muons ( (+corresponding) to the mean muon energy ….
On page 16/19, line 41, left column .. [28] is the (+simulation) most consistent with the test beam data.
Unclarity on Birks' constant: on page 16/19, line 41, left column, the preferred value is 0.0794; on page 17/19, line 46, right column, and on page 18/19, line 40, left column, the preferred value is 0.2. What should the reader conclude? Which value is taken for the overall simulation? Or is it possible to set it differently for different geometrical parts?
Editor's Comments
Editor: Pia, Maria Grazia
Comments to the Author:
First of all, I apologize for the abnormal delay of this manuscript in the review process; one of the reviewers repeatedly postponed returning the review report.
The reviewers expressed some positive comments about the paper, but they also identified various issues to be addressed. A few additional comments are summarized here.
In the present form the manuscript reads like a project report rather than a scientific research paper: it extensively describes the characteristics of the detector prototype and reports some test beam measurements and simulations, but it does not adequately discuss how the detector relates to the state of the art in the field, whether any of its technological aspects contribute to advancements in calorimetry or address novel experimental requirements that cannot be satisfied by established techniques, and whether its operational performance demonstrates any improvement over existing technology. These issues deserve proper attention to characterize the work as a research paper.
One of the reviewers remarked that the manuscript lacks proper statistical inference; many statements remain at qualitative, subjective level (e.g. "good agreement", "excellent agreement"), which is inappropriate to a scientific publication. The reviewer's recommendations identify constructive actions to take to address the deficiencies of the manuscript in this area.
The manuscript reports some simulations based on Geant4. The Geant4 physics configuration used to produce the results is not justified: it is described as "recommended", but the manuscript does not document any objective motivation for using it among the many possible physics modeling options available in Geant4, either directly or through pertinent references in the literature reporting quantitative validation results. The choice of the physics configuration should be justified on sound grounds; if it is not possible to identify a single physics configuration option as the most appropriate for the experimental scenario of sPHENIX, one would expect a sensitivity analysis over multiple physics configuration options, complemented by proper quantitative validation with respect to experimental data.
The manuscript raises some epistemological concerns: expressions like "simulation tunes" and empirical adjustments of the value of Birks' constant hint at the simulation being calibrated rather than validated. It is unclear whether any validation of the simulation was performed independently of its calibration. This issue should be clarified.
The manuscript does not identify the version of Geant4 used to produce the results it reports; this omission hinders the reproducibility (at least in principle) of the results, which is an essential feature of the experimental method. Due to the rapid evolution of Geant4, one would expect also some quantitative appraisal of the stability of the results, as this would be a relevant issue for a detector intended to operate over a relatively long time scale.
The bibliography is not consistent with the TNS format. A reference to J. Allison et al., "Geant4 Developments and Applications," IEEE Trans. Nucl. Sci., vol. 53, no. 1, pp. 270-278, 2006, is missing.
Senior Editor's Comments
- Re: [Sphenix-electronics-l] [Wed 2:30 PM] meeting to address Review Reports for T-1044 2016 beam test paper, Edward Kistenev, 08/16/2017