[Usatlas-hllhc-computing-l] First meeting of the distributed training working group

  • From: Torre Wenaus <wenaus AT gmail.com>
  • To: usatlas-hllhc-computing-l AT lists.bnl.gov
  • Subject: [Usatlas-hllhc-computing-l] First meeting of the distributed training working group
  • Date: Tue, 7 Aug 2018 16:42:17 +0200

Hi,
We are planning a first meeting of the distributed training working group, one of the working groups defined at last month’s US ATLAS / CSI workshop at BNL. If you’re interested in attending, please fill in the Doodle poll:
https://doodle.com/poll/iayykxwd94isqdf4
The distributed training WG will examine scaling out ML training across distributed/parallel resources in order to minimize the turnaround time on network tuning and ML studies. The technical approaches to be considered include those discussed at the workshop; cf. the talks by Abid (Horovod), Alexei (PanDA), and Amir. It was agreed at the workshop to define concrete objectives for the WG by the end of September, so that will be the main topic of this meeting.
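
For orientation, the Horovod approach is data-parallel training: each worker runs the same model on its own shard of the data, and gradients are averaged across workers with an allreduce after each step. A minimal sketch in PyTorch is below; the model, dataset, and hyperparameters are placeholders for illustration only, not code from any of the workshop talks.

# Illustrative sketch only: minimal Horovod data-parallel training loop in PyTorch.
# Model, dataset, and hyperparameters are placeholders, not from the workshop talks.
import torch
import torch.nn as nn
import torch.utils.data
import horovod.torch as hvd

hvd.init()                               # one process per GPU
torch.cuda.set_device(hvd.local_rank())  # pin each process to its local GPU

# Placeholder model and random data; a real study would plug in the network being tuned.
model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 2)).cuda()
dataset = torch.utils.data.TensorDataset(torch.randn(10000, 100),
                                         torch.randint(0, 2, (10000,)))

# Shard the dataset so each worker sees a distinct slice per epoch.
sampler = torch.utils.data.distributed.DistributedSampler(
    dataset, num_replicas=hvd.size(), rank=hvd.rank())
loader = torch.utils.data.DataLoader(dataset, batch_size=64, sampler=sampler)

# Scale the learning rate with the number of workers and wrap the optimizer
# so gradients are averaged across workers with allreduce.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())
optimizer = hvd.DistributedOptimizer(optimizer,
                                     named_parameters=model.named_parameters())

# Start all workers from identical weights and optimizer state.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    sampler.set_epoch(epoch)
    for x, y in loader:
        x, y = x.cuda(), y.cuda()
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    if hvd.rank() == 0:
        print(f"epoch {epoch} loss {loss.item():.4f}")

Workers would typically be launched with something like horovodrun -np 4 python train.py, one process per GPU.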
  Abid & Torre



