
User Info for MSU Tier3

Regulations

Your usage of the cluster must conform to MSU's acceptable use policy: http://www.msu.edu/au/

Privacy

The cluster is a multi-user system intended for non-confidential uses such as physics research; as such, user privacy is not guaranteed. Users' activity on the cluster may be recorded or monitored.

Users should not store confidential information on the system. This includes private email, records containing personal information, banking information (credit card receipts or documents containing account numbers), MSU student grading information, and other MSU student information (for instance, files linking students' names and PIDs).

Connecting

Login nodes are green.aglt2.org, white.aglt2.org, and blue.pa.msu.edu. Connect using SSH or the x2go client.
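As a convenience, you can add a host alias to your SSH configuration so a short command reaches a login node. This is only a sketch; "username" is a placeholder for your actual cluster account name.

```shell
# Append a host alias to ~/.ssh/config so that "ssh green" works.
# "username" is a placeholder for your cluster account.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host green
    HostName green.aglt2.org
    User username
EOF

# Connect with either of:
#   ssh green
#   ssh username@green.aglt2.org
```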

Passwords

Account passwords are maintained in Kerberos. To change your password, log in to an interactive node (see above) and run the command "kpasswd". These machines use the same passwords as the HEP desktop cluster.

It can take up to 60 minutes for the new password to become active.

Storage

Information on the cluster storage systems is at MSUTier3Storage

Batch System - Condor

The batch system is Condor. Jobs are submitted from the login nodes "green", "white", or "blue". The cluster has approximately 500 job slots spread across about 50 computers. If you are submitting more than about 100 jobs at once, some consideration of the jobs' resource usage and tuning may be needed so that other users of the cluster are not impacted. Please contact JamesKoll if you have questions.

Here are some local and remote references for Condor:
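As an illustration, a minimal submit description file looks like the sketch below. The file name "job.sub" and the use of /bin/echo as the executable are placeholders; substitute your own analysis program and arguments.

```shell
# Write a minimal Condor submit description file.
# "/bin/echo" stands in for your real analysis executable.
cat > job.sub <<'EOF'
universe   = vanilla
executable = /bin/echo
arguments  = "hello from condor"
output     = job.$(Cluster).$(Process).out
error      = job.$(Cluster).$(Process).err
log        = job.log
queue 10
EOF

# Submit and monitor from green, white, or blue:
#   condor_submit job.sub
#   condor_q
```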

Applications

Email

Don't use this cluster for receiving email; it is not supported. Maintain a .forward file in your home directory so that locally generated email will reach you.
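A minimal sketch of setting this up, where "you@example.edu" is a placeholder for your real address:

```shell
# Put your real address in ~/.forward so locally generated mail
# is forwarded there. "you@example.edu" is a placeholder.
echo "you@example.edu" > "$HOME/.forward"
```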

CVMFS, access to ROOT, Athena, Panda, etc.

Important analysis software is centrally available via CVMFS (the CERN Virtual Machine File System). This is a file system connected to a central repository at CERN; it allows access to and caching of analysis software without installing it locally, so a large variety of packages and recent versions are available. To set up CVMFS, use:
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/
source /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/user/atlasLocalSetup.sh

You have a new set of commands available, e.g.:
...Type localSetupAGIS to setup AGIS
...Type localSetupDQ2Client to use DQ2 Client
...Type localSetupEmi to use emi
...Type localSetupFAX to use FAX
...Type localSetupGanga to use Ganga
...Type localSetupGcc to use alternate gcc
...Type localSetupMana to setup mana
...Type localSetupPacman to use Pacman
...Type localSetupPandaClient to use Panda Client
...Type localSetupPyAMI to setup pyAMI
...Type localSetupPoD to setup Proof-on-Demand
...Type localSetupROOT to setup (standalone) ROOT
...Type localSetupRucio to setup Rucio
...Type localSetupXRootD to setup XRootD
...Type showVersions to show versions of installed software
...Type asetup to setup a release
...Type changeASetup to change asetup configuration
...Type rcSetup to setup an ASG release
...Type changeRCSetup to change rcsetup configuration
...Type diagnostics for diagnostic tools
...Type helpMe for more help
...Type printMenu to show this menu

Use _command_ -h to see which versions and options are available. There is always a default version available, which does not need any additional arguments (except for athena; also be aware that the default version can change without notice, which might break your compiled programs).

If you want to use these commands in a script, a small workaround is needed, since the commands above are actually aliases. For example, you set up ROOT like this in a script (interactively, the last line would simply be localSetupROOT):
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/
source /cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase/user/atlasLocalSetup.sh
source ${ATLAS_LOCAL_ROOT_BASE}/packageSetups/atlasLocalROOTSetup.sh

Proof-On-Demand

PROOF is a way to analyse ROOT files in parallel on a batch system. Proof-on-Demand does not require an installation of PROOF on all the cluster nodes; the user can set up an ad-hoc PROOF cluster. More information can be found here: Proof and Proof-on-demand
ROOT - old information

A couple of versions of ROOT are available under /msu/opt/cern/rootSL6. See the Root setup page for more details. Contact James Koll if you need a different version installed.

Tier 3 Presentations
https://www.aglt2.org/wiki/pub/AGLT2/MSUTier3/msut3resources_final.pdf

UsingPandaTools(pathena)

Clicking on the topic above leads to explanations at a very basic level, easy examples, etc.

-- TomRockwell - 29 Sep 2008
Topic revision: r26 - 23 Apr 2014, JamesKoll