
phys-npps-mgmt-l - Re: [Phys-npps-mgmt-l] Fwd: "Shovel ready AI" proposals

phys-npps-mgmt-l AT lists.bnl.gov

Subject: NPPS Leadership Team

  • From: Torre Wenaus <wenaus AT gmail.com>
  • To: Paul Nilsson <Paul.Nilsson AT cern.ch>
  • Cc: "mkirby AT bnl.gov" <mkirby AT bnl.gov>, NPPS leadership team <Phys-npps-mgmt-l AT lists.bnl.gov>, "tmaeno AT bnl.gov" <tmaeno AT bnl.gov>, Michel Hernandez Villanueva <mhernande1 AT bnl.gov>
  • Subject: Re: [Phys-npps-mgmt-l] Fwd: "Shovel ready AI" proposals
  • Date: Tue, 15 Apr 2025 14:06:32 -0400

That too is possible; my current title is:
AI agents for HEP/NP: A new level of experiment automation and user knowledge

On Tue, Apr 15, 2025 at 1:53 PM Paul Nilsson <Paul.Nilsson AT cern.ch> wrote:

“AI Agents for HEP/NP: From Automation to Knowledge-Driven User Support”?

 

Paul

 

From: Torre Wenaus <wenaus AT gmail.com>
Date: Tuesday, April 15, 2025 at 19:38
To: Paul Nilsson <Paul.Nilsson AT cern.ch>
Cc: mkirby AT bnl.gov <mkirby AT bnl.gov>, NPPS leadership team <Phys-npps-mgmt-l AT lists.bnl.gov>, tmaeno AT bnl.gov <tmaeno AT bnl.gov>, Michel Hernandez Villanueva <mhernande1 AT bnl.gov>
Subject: Re: [Phys-npps-mgmt-l] Fwd: "Shovel ready AI" proposals

I really agree with you, Paul...

 

I’m more interested in log analysis than chatbots (we are going to get that for free so to speak)

 

The problem I've had with working directly on chatbots is that you do get that for free, and the field is moving extremely rapidly, so it seems doubly pointless to work on the chatbot proper. A few months ago RAG was the way to extend an LLM with specialized knowledge; now comes MCP, which obsoletes much of that. But MCP work is on your own system, integrating it with that fast-moving AI ecosystem; it will apply to and leverage the 'LLM of the day'. It's a safe, worthwhile investment. And if we get funding, we can apply effort not just to the MCP agent itself (Claude is capable of writing the MCP interface automatically, given an API), but to what the MCP is serving up, i.e. what you're already doing or planning to do.
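To make the "what the MCP is serving up" idea concrete, here is a minimal sketch of the two MCP tool operations (tools/list, tools/call) as a plain dispatcher. The tool name and the PanDA-style payload are hypothetical; a real server would use the official MCP SDK and speak JSON-RPC over stdio or HTTP rather than plain dicts.

```python
import json

# Hypothetical tool registry; a real PanDA MCP server would define its own.
TOOLS = {
    "get_job_status": {
        "description": "Return current status counts for a user's PanDA jobs",
        "inputSchema": {
            "type": "object",
            "properties": {"user": {"type": "string"}},
            "required": ["user"],
        },
    },
}

def handle(request: dict) -> dict:
    """Dispatch an MCP-shaped request to a tool."""
    if request["method"] == "tools/list":
        return {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    if request["method"] == "tools/call":
        name = request["params"]["name"]
        args = request["params"]["arguments"]
        if name == "get_job_status":
            # Placeholder data: a real implementation would query PanDA here.
            payload = {"user": args["user"], "running": 120, "queued": 40}
            return {"content": [{"type": "text", "text": json.dumps(payload)}]}
    raise ValueError(f"unknown request: {request['method']}")
```

The point of the exercise: the LLM client only ever sees the tool descriptions and the returned content, so all the value is in what the service chooses to expose.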

 

If you can think of a better title, let me know. It is more than 'automation': it also serves users (ops, shifters, analyzers) with high-quality information in an AI context like an LLM. But how to say that in two words?

 

  Torre

 

 

On Tue, Apr 15, 2025 at 1:03 PM Torre Wenaus <wenaus AT gmail.com> wrote:

I really appreciate the input! Some quick comments...

- has to be new and eye-catching; LLMs to improve docs doesn't cut it

- Kirby: yes! As Paul mentioned, he's started some of this already; MCP provides a powerful context to do this and more

- Michel: great, will include Belle II ops/shift

- not supported in all models at present, not available in free commercial models, takes more GPU power than we have... all true, but none are blocking issues; if they give us money for a proposal, all are solved :-)

 

Dmitri wants a slide now, for a 5pm meeting with the bosses. Draft for your quick contributions and comments...

 

  Torre

 

 

On Tue, Apr 15, 2025 at 12:03 PM Paul Nilsson <Paul.Nilsson AT cern.ch> wrote:

Hi,

 

A few ideas: in principle, the “Ask PanDA” tool I’m working on could be turned into an “intelligent agent” using MCP. The new version is plug-in based anyway, so switching to a technology that essentially does just that for you is preferable. However, as far as I understand, MCP doesn’t support all models yet, and some are supported via third parties (or even wrappers), as seems to be the case for Llama, which might still be a bit experimental. A limiting factor is also that we don’t have access to that many GPUs, unless we go down the commercial road. I’m just getting set up at SLAC to start testing Llama, using their GPUs (two, I believe, i.e. the same situation as at NERSC and BNL). As for CERN, they seem to be planning an LLM as a service, but that appears to be at a very early planning stage.

 

I think it’s good to get some experience with MCP technology, and it could be developed into something more than a chatbot. Actually, personally, I’m more interested in log analysis than chatbots (we are going to get chatbots for free, so to speak), and we also have some ideas about the PanDA monitor, related to error messages, what they mean, and so on. A worrying factor is, as I said, the limited number of available GPUs. Two GPUs give only a handful of tokens per second at SLAC, for example, so hardly enough for production. But we need to start somewhere.
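To give the log-analysis idea a concrete flavor: a cheap first step could be extracting and bucketing error signatures from logs before handing excerpts to an LLM for interpretation. Everything below, from the patterns to the log format, is made up for illustration; real pilot/monitor logs would need their own signature set.

```python
import re
from collections import Counter

# Hypothetical error signatures; illustrative only.
PATTERNS = {
    "stage-in failure": re.compile(r"failed to stage in", re.IGNORECASE),
    "out of memory": re.compile(r"out of memory|oom-killer", re.IGNORECASE),
    "walltime exceeded": re.compile(r"walltime.*exceeded", re.IGNORECASE),
}

def bucket_errors(log_lines):
    """Count occurrences of each known error signature in a log."""
    counts = Counter()
    for line in log_lines:
        for label, pattern in PATTERNS.items():
            if pattern.search(line):
                counts[label] += 1
    return counts
```

Only the lines that match a bucket (or match nothing and need explanation) would go to the model, which keeps the token budget, and hence the GPU demand, small.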

 

Cheers,

Paul

 

From: Kirby, Michael <mkirby AT bnl.gov>
Date: Tuesday, April 15, 2025 at 13:50
To: Torre Wenaus <wenaus AT gmail.com>
Cc: NPPS leadership team <Phys-npps-mgmt-l AT lists.bnl.gov>, tmaeno AT bnl.gov <tmaeno AT bnl.gov>, Paul Nilsson <Paul.Nilsson AT cern.ch>, Michel Hernandez Villanueva <mhernande1 AT bnl.gov>
Subject: Re: [Phys-npps-mgmt-l] Fwd: "Shovel ready AI" proposals

 

Hi Torre,

 

The idea of having something like MCP cooked into services definitely seems like it would have significant advantages, by making it easy to connect a “data source” via a plugin. If I understand correctly, MCP acts as a standardized interface for getting real-time data into an AI client? So for PanDA, you could use MCP to slurp in the current data on running jobs and queues, and then let a user ask the question “how soon will my jobs be finished?”, and Claude will tell them whether they should get a coffee, go to dinner, or go on vacation? That’s maybe oversimplified, but it’s absolutely a useful type of thing that would have a huge impact on user experience and/or operational decisions.
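The coffee/dinner/vacation decision could be sketched as a toy estimator behind such a tool. The drain model (jobs spread evenly over slots) and the advice thresholds are entirely made up for illustration:

```python
def eta_advice(jobs_remaining: int, avg_job_minutes: float, slots: int) -> str:
    """Rough wall-clock estimate, assuming jobs drain evenly over the slots."""
    minutes = jobs_remaining * avg_job_minutes / max(slots, 1)
    if minutes < 30:
        return "get a coffee"
    if minutes < 300:
        return "go to dinner"
    return "go on vacation"
```

An MCP tool wrapping this would pull `jobs_remaining` and `slots` live from PanDA, which is exactly the "real-time data into an AI client" point.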

 

Cheers,

Kirby

 

On Apr 14, 2025, at 19:57, Torre Wenaus <wenaus AT gmail.com> wrote:

 

Hi all,

Please have a look at the following and tell me if you think it is crazy. Hong wants 1-2 pages describing how we could use existing effort in this FY to attract DOE AI money for this FY that they are looking to disburse quickly. I think that in an MCP-directed plan as below, we could identify time fractions of people in several areas of activity/expertise to get experimental MCP services in place, and client(s) to use them, and deliver something useful in a short time. You can tell me whether I'm crazy, or tell me what we should be proposing for this FY25 scenario instead :-)

 

I include at the bottom the referenced material in the budget briefing.

 

  Torre

 

---------- Forwarded message ---------
From: Torre Wenaus <wenaus AT gmail.com>
Date: Mon, Apr 14, 2025 at 11:53 AM
Subject: Re: "Shovel ready AI" proposals
To: Denisov, Dmitri <denisovd AT bnl.gov>
Cc: Ma, Hong <hma AT bnl.gov>

 

Hi, 

These are some musings after a weekend trying out pieces of the emerging 'agentic ecosystem'.

 

A big domain that is about to open up (as big as LLM tech itself, I think) is active agents representing services that can be interrogated and used by software, LLMs, and other human interfaces, to create intelligent assistants/actors with a powerful capability to act in real time on completely current information: info direct from, and curated by, the services themselves (no hallucinations from garbage-in data, no blindness to anything after training time). Anthropic established an open source protocol, the Model Context Protocol (MCP), a few months ago, and it is taking off fast. In my opinion it is worth getting into directly now, much more so than playing around with LLM chatbots and extensions like RAG. It is the sort of tech that can be the real basis for "end to end integrated AI from collider to detector to analysis" for the EIC, and for greatly increasing intelligent automation and user-facing capability in the systems and services we develop, like PanDA. This is facility/ops; it is autonomous control of accelerators/experiments; it fits where the funding apparently is. It is shovel-ready: I think we could start instantly in contexts like PanDA, EIC streaming readout, and the ops environments of running experiments, if they were willing to explore the possibilities. It is work that should bring significant short-term return from a moderate time investment, so we could put existing people on it without killing their useful productivity.

 

I'll be talking to people about this, but I haven't yet. If you want to include something about this in what you're preparing this week, let me know what you'd like and I'll fast-track the discussion to see whether others agree :-)

 

  Torre

 

On Fri, Apr 11, 2025 at 4:35 PM Denisov, Dmitri <denisovd AT bnl.gov> wrote:

Thank you for forwarding!

 

From: Ma, Hong <hma AT bnl.gov>
Sent: Friday, April 11, 2025 4:34 PM
To: Torre Wenaus <wenaus AT gmail.com>
Cc: Denisov, Dmitri <denisovd AT bnl.gov>
Subject: FW: "Shovel ready AI" proposals

 

Hi Torre,

 

                If you have ideas about shovel-ready AI projects, please let us know, or work with the other group leaders.

                This can potentially recover some of the lost research funding.

 

                Best,

 

                Hong.

 

From: Denisov, Dmitri <denisovd AT bnl.gov>
Date: Friday, April 11, 2025 at 4:25 PM
To: Rajagopalan, Srini <srinir AT bnl.gov>, Begel, Michael <begel AT bnl.gov>, Kettell, Steven <kettell AT bnl.gov>, sallydawsonbnl AT gmail.com <sallydawsonbnl AT gmail.com>, Slosar, Anze <anze AT bnl.gov>, Jaffe, David <djaffe AT bnl.gov>
Cc: Kotcher, Jonathan <kotcher AT bnl.gov>, Ma, Hong <hma AT bnl.gov>, Deshpande, Abhay <abhay AT bnl.gov>
Subject: "Shovel ready AI" proposals

Folks,

 

As you have heard, OHEP is interested in “shovel ready” AI proposals to at least partly compensate for FY25 reductions. You can read slides 11-12 from Alan’s recent summary (attached); this is all we know about this topic for now. To get organized, and as we are meeting with JoAnne on this topic next Tuesday, I suggest all of you consider which activities in your groups could be “recolored” as AI, or become AI with little change, so the funding can start in FY25.

 

We will need a few initial examples (1-3 from each area) by next Tuesday morning:

1. Title and abstract (a few sentences, not long).

2. Availability to start activities in FY25.

3. Approximate duration and funding required (for the full duration of the activities and in FY25). I recommend a total duration of 1-2 years, not more.

 

If you send me your drafts by Tuesday morning, April 15, I’ll combine them into a presentable document to discuss with the lab.

 

Many thanks, Dmitri.


- Are there opportunities for additional funding in FY 2025?
  - Yes, with successful proposals to Hardware-Aware AI and Early Career Research
  - Maybe, in AI: we will be receptive to shovel-ready, well-coordinated AI projects
    - Including Facilities/Operations

- Cosmologists and particle physicists are early adopters and developers of AI
  - An epoch of advanced AI is an epoch of discoveries in fundamental physics and cosmology
- HEP is built on statistical analyses of PB-scale data to test theory
  - HEP requires a deep understanding of probabilities in the interpretation of data
  - This requirement is driving development of AI that can handle probabilistic rigor
    - Probabilistically rigorous AI would be a game changer for many fields of science, in addition to being invaluable for applications beyond science
  - PCAST Report on Supercharging Research: Harnessing Artificial Intelligence to Meet Global Challenges
- HEP uniquely pushes development of AI at the edge and in real-time applications
  - Autonomous control of particle accelerators and detectors
  - AI for applications in remote, power-constrained environments
    - Low-power, high-performance AI is most likely to be advanced by HEP
- HEP trains an AI-ready workforce with experience in real-world applications using AI on PB-scale data

 

Michael Kirby (he/him/his)

Senior Physicist

 

Brookhaven National Laboratory

Cell: +1 630 965 1456


 


 

--

-- Torre Wenaus, BNL NPPS Group Leader, ATLAS and ePIC experiments

-- BNL 510A 1-222 | 631-681-7892


 






