Just starting this out... Tom, May 21

Data Storage Locations

What locations are in use and how they are used.

We have a number of storage locations for AGLT2:
  • /atlas/data08/dq2 --- Served via umfs02.grid.umich.edu (NFS or gridftp). Will be the primary location for usatlas data required for production use. Around 11TB of total space.
  • /atlas/data16/dq2 --- Served via umfs05.aglt2.org (NFS or gridftp). Primary "overflow" location for usatlas data required for production use. Around 21TB of total space.
  • /atlas/data15/mucal --- Served via dq2.aglt2.org (NFS or gridftp). Primary location for muon alignment and calibration data. Around 6.4TB of total space.
  • /atlas/data14 --- Served via umfs04.aglt2.org (NFS or gridftp). Primary Tier-3 user space. Around 16TB of total space.
  • /atlas/data13 --- Served via umfs03.aglt2.org (NFS or gridftp). Secondary Tier-3 user space. Around 16TB of total space.
  • dCache --- Served via head01.aglt2.org or head02.aglt2.org, or via any Tier-2 node acting as a door. Around 38TB of total raw space (without resiliency). To be configured via pool allocation into 19TB of non-resilient space and 9.5TB of resilient space (two replicas per file, so consuming the remaining 19TB of raw space).
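
A quick way to check current usage across the NFS-served areas is a single df call over the mount points; a minimal sketch, assuming all five areas are mounted on the host where it is run:

# Report current usage on each NFS-served storage area
df -h /atlas/data08 /atlas/data13 /atlas/data14 /atlas/data15 /atlas/data16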

How-tos / FAQs

After the AOD/NTUPs replicated from BNL were found to be corrupted, we needed to unsubscribe all USATLAS Tier-2s from these data streams. Alexei did this at the end of April. However, at AGLT2 we were left with almost 9TB of local AOD/NTUPs that needed to be cleaned up. These were located in the /atlas/data08/dq2 storage area.

The easy way to do this is to first identify the files in the LRC at Michigan (on umfs02.aglt2.org) and mark them as archival='V' (volatile). The next time the 'cleanse.py' script is run on the /atlas/data08/dq2 storage area, all such files will be removed. The MySQL command to mark them is:

mysql> UPDATE t_meta SET archival='V'
       WHERE guid IN (SELECT guid FROM t_pfn
                       WHERE pfname LIKE '%/atlas/data08/dq2%'
                         AND (pfname LIKE '%NTUP%' OR pfname LIKE '%AOD%'));
Query OK, 226314 rows affected (25.90 sec)
Rows matched: 226314  Changed: 226314  Warnings: 0
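
Before kicking off the cleanup, the same selection can be sanity-checked from the shell. A sketch: the database name 'lrc' and the account are placeholders, while the table and column names are those used in the query above.

# Count LRC entries now marked volatile under /atlas/data08/dq2 ('lrc' is a placeholder DB name)
mysql -u reader -p lrc -e "SELECT COUNT(*) FROM t_meta WHERE archival='V' \
  AND guid IN (SELECT guid FROM t_pfn WHERE pfname LIKE '%/atlas/data08/dq2%');"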

This was run on May 30, 2007. Two jobs were then started on UMFS02, one as 'usatlas1' and one as 'usatlas3', each running 'cleanse.py' on /atlas/data08/dq2.
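
The launch amounted to something like the following (a sketch only: it assumes 'cleanse.py' is on the PATH and takes the storage area as its argument, and that it is run from a root shell so su can switch accounts):

# One detached screen session per account, each cleansing /atlas/data08/dq2
screen -dmS cleanse-u1 su - usatlas1 -c 'cleanse.py /atlas/data08/dq2'
screen -dmS cleanse-u3 su - usatlas3 -c 'cleanse.py /atlas/data08/dq2'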

Initial disk usage on /atlas/data08 (May 30, 10 AM; df output in 1K blocks):
Filesystem               1K-blocks        Used  Available Use% Mounted on
/dev/sda               10742052864 10423992548  318060316  98% /atlas/data08

Jobs started via 'screen' at 10:05 AM on May 30th.
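
Progress can be followed by reattaching to the screen sessions or simply watching the space come back:

screen -ls                      # list the running cleanse sessions
watch -n 60 df /atlas/data08    # re-check reclaimed space every minute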

Changing autopilot rates, etc. (Twiki page)

List active subscriptions

Local automation

What scripts are run for clean-up, etc.

-- TomRockwell - 21 May 2007