ATLAS Computing and Muon Calibration Center


SuperComputing 2014:
One Server, 100 Gbps over the WAN for ATLAS/LHC
Software Driven Dynamic Hybrid Networks With Terabit/sec Science Data Flows

More information

Materials and photos from the HEPiX Fall 2013 Workshop at AGLT2 UM.

Current Statistics

We have 4854 Condor jobs (3316 running on 6760 cores, 1531 idle, 7 held)

Total Slots 3536, Cores 6879
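Totals like these come from tallying each queued job's state. A minimal sketch of such a tally, using the standard HTCondor JobStatus codes (1 = idle, 2 = running, 5 = held) over made-up job records rather than a live scheduler:

```python
# Toy tally of Condor job states; the job records here are synthetic,
# but the JobStatus codes match HTCondor's convention.
from collections import Counter

JOB_STATUS = {1: "idle", 2: "running", 5: "held"}

def summarize(jobs):
    """Count jobs per state, e.g. from records parsed out of condor_q."""
    counts = Counter(JOB_STATUS.get(j["JobStatus"], "other") for j in jobs)
    counts["total"] = len(jobs)
    return dict(counts)

jobs = [{"JobStatus": 2}] * 3 + [{"JobStatus": 1}] * 2 + [{"JobStatus": 5}]
print(summarize(jobs))  # {'running': 3, 'idle': 2, 'held': 1, 'total': 6}
```

In production the records would come from querying the scheduler (for example with `condor_q`) rather than a hard-coded list.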

Job status page

ATLAS Computing


AGLT2 provides more than 6600 CPU cores and 3.5 petabytes of storage for ATLAS physics computing. Site infrastructure services for job management, storage management, and interfacing with the ATLAS computing cloud are managed at UM, while computing and storage resources are located at both UM and MSU.

To outside users, our site appears as a single entity. The collaboration between UM and MSU allows the site to provide twice the resources either institution could offer alone, and increases redundancy in the event that either site is unavailable.

ATLAS Muon System Calibration


To determine calibration constants for the ATLAS Monitored Drift Tubes (MDTs), a special data stream is sent to calibration centers in Michigan, Rome, and Munich.

Storage, CPU, and human resources at AGLT2 are dedicated to MDT calibration, including dedicated hosts that process the calibration data, a database replicated to CERN, and custom AGLT2-authored tools. Calibration runs every day the LHC has beam.
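As an illustration only (not the actual ATLAS calibration software): one basic MDT calibration constant is the drift-time offset t0, roughly the leading edge of the measured drift-time spectrum. A toy sketch on synthetic data:

```python
# Toy extraction of a drift-time offset t0 as the leading edge of a
# drift-time spectrum. Synthetic data and a crude quantile estimator;
# the real calibration fits the spectrum's rising edge.
def estimate_t0(drift_times_ns, edge_fraction=0.02):
    """Take a low quantile of the spectrum as a crude leading-edge t0."""
    ordered = sorted(drift_times_ns)
    index = int(edge_fraction * (len(ordered) - 1))
    return ordered[index]

# Synthetic spectrum: hits spread between a true t0 of ~50 ns and ~750 ns.
times = [50 + (i * 7) % 700 for i in range(1000)]
print(estimate_t0(times))  # prints 57 for this synthetic spectrum
```

The point is only that each constant is a statistical summary of many hits, which is why a dedicated data stream and dedicated processing hosts are needed.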



Dynamic Network System (DYNES)


The Dynamic Network System (DYNES) is a nationwide cyber-instrument spanning about 40 US universities and 11 Internet2 connectors.

A collaborative team including Internet2, Caltech, the University of Michigan, and Vanderbilt University will work with regional networks and campuses to support large, long-distance scientific data flows for the LHC and other leading programs in data-intensive science.

UM's roles in the collaboration include designing provisioning systems for network switches and computer hardware, provisioning network hardware for sites, and operating the centralized monitoring and control infrastructure.


Advanced Network Services for Experiments

Building on the successful deployment of the DYNES instrument at dozens of campuses, the ANSE project, newly funded in summer 2012, will complete the integration of the network with end-user applications, in particular the software stacks of the HEP experiments.

The ATLAS-specific part of the project includes integrating a “network element” into PanDA, the ATLAS workload management system, and enabling scheduling of network resources alongside the other resources PanDA already handles.
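The idea of co-scheduling network and compute can be sketched as a brokerage that picks the site able both to run a job and to move its input data fastest. All names and numbers below are hypothetical; PanDA's real brokerage is far more involved:

```python
# Hypothetical sketch of network-aware brokerage: among sites with free
# CPU slots, choose the one with the shortest input-transfer time.
# Not PanDA code; site records and fields are invented for illustration.
def pick_site(sites, input_gb):
    """sites: dicts with 'name', 'free_slots', and 'bandwidth_gbps'."""
    def transfer_seconds(site):
        return input_gb * 8 / site["bandwidth_gbps"]  # GB -> gigabits

    candidates = [s for s in sites if s["free_slots"] > 0]
    return min(candidates, key=transfer_seconds)["name"]

sites = [
    {"name": "AGLT2", "free_slots": 120, "bandwidth_gbps": 100},
    {"name": "SiteB", "free_slots": 0,   "bandwidth_gbps": 100},
    {"name": "SiteC", "free_slots": 300, "bandwidth_gbps": 10},
]
print(pick_site(sites, input_gb=500))  # AGLT2: free slots and fastest link
```

Making bandwidth a schedulable quantity, as in this sketch, is what the "network element" adds: without it, the broker sees only CPU and storage.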


We envision distributed computing environments where resources can be deployed easily and flexibly to meet the demands of data intensive science.

The project is a collaborative effort among Caltech, the University of Michigan, the University of Texas at Arlington, and Vanderbilt University.

SuperComputing 2014


One Server, 100 Gbps over the WAN for ATLAS/LHC

An international team from Caltech, the University of Michigan, and the University of Victoria demonstrated a data transfer architecture based on a single server capable of meeting the WAN transfer needs of computing centers connected at 100 Gbps.

Intelligent Software Driven Dynamic Hybrid Networks

Our team, together with teams from SPRACE São Paulo, FIU, Vanderbilt, and other partners, smashed previous records for data transfers using software-defined networking (SDN).

ATLAS - A Toroidal LHC ApparatuS


ATLAS is a particle physics detector at the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland. Our site provides computational power to process ATLAS data.