Running ATLAS software on SLC 4.3 x86_64

We had installed the mars01.cern.ch-mars05.cern.ch systems with SLC 4.3 and an experimental kernel. Although we could install the ATLAS software kits, we were unable to start 'athena' successfully or run the kit validation. The problems we found were:
  • Not all of the needed compatibility libraries were present.
  • Not all of the needed 32-bit (i[3|6]86) RPMs were installed.
  • The LD_LIBRARY_PATH that cmt set up included directories with "-slc4-" in their names, even though no such directories exist (a quick check for this is sketched below).
  • A newer version of a Castor RPM had introduced an incompatible libshift version.
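
A quick way to confirm the LD_LIBRARY_PATH problem is to list the path entries that do not actually exist on disk. A minimal sketch, run in a shell where the ATLAS setup has already been sourced:

# Print any LD_LIBRARY_PATH component that is not an existing directory
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | while read dir; do
    [ -n "$dir" ] && [ ! -d "$dir" ] && echo "missing: $dir"
done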

To fix these problems I created a simple bash script to set up a node for ATLAS software:

#!/bin/bash
#
# Setup SLC4 x86_64 to allow ATLAS software to run
#
echo "This script updates/modifies and SLC4 x86_64 installation to work with ATLAS software..."

echo "Installing all the 'compat' libraries..."
yum install compat-glibc compat-gcc-32 compat-glibc-headers compat-gcc-32-c++ compat-libstdc++-33 compat-libstdc++-33.i386 compat-glibc.i386

# Lie about the linux version
echo "Modifying /etc/redhat-release to fake this system as slc3 instead of slc4"
sed -i.orig -e 's/4\./3./' /etc/redhat-release

# get 32 bit versions of some software
yum install libf2c.i386 openssl.i686

# Adding "old" libshift to /usr/lib
echo "Adding old libshift to /usr/lib..."
cp -v /afs/atlas.umich.edu/i386_linux24/usr/lib/libshift.so.2.0.2.1 /usr/lib/
rm -fv /usr/lib/libshift.so.2.0
ln -vs /usr/lib/libshift.so.2.0.2.1 /usr/lib/libshift.so.2.0
rm -fv /usr/lib/libshift.so
ln -vs /usr/lib/libshift.so.2.0.2.1 /usr/lib/libshift.so
/sbin/ldconfig

echo "Should work now...finished"

This seems to work: running directly from AFS on mars01.cern.ch worked correctly, though the installed kit still has some minor issues. The kludge of rewriting /etc/redhat-release is distasteful and could have unexpected repercussions. Ideally we should be able to pass some argument or environment variable that forces 'slc3' instead of 'slc4'.
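
One untested possibility along those lines: CMT selects binaries via the CMTCONFIG tag, so forcing an slc3 tag before sourcing the kit's setup script might avoid editing /etc/redhat-release entirely. The exact tag string below is an assumption and depends on the kit release:

# Untested sketch: force the slc3 binary tag instead of faking the OS release
export CMTCONFIG=i686-slc3-gcc323-opt
source setup.sh   # re-source the kit setup so cmt picks up the forced tag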


Installing ROCKS V4.1 on UMOPT1 (dual dual-core Opteron 270 node, x86_64 version)

We have received four new worker nodes that we want to set up to run ROCKS V4.1, analogous to our UMROCKS (athlon, i686) cluster. All the new nodes, as well as UMOPT1, have Supermicro H8DAR-T motherboards, dual Broadcom 5704 NICs, IPMI cards (using LAN port 0), and 8 GB of RAM.

Our plan is to install ROCKS 4.1 x86_64 on UMOPT1 after saving any existing files we want to preserve. We want to "reverse" the default eth0/eth1 mapping so that LAN PORT 0 (connected to the IPMI card) becomes the PUBLIC (eth1) NIC, while LAN PORT 1 becomes the private (eth0) NIC used to install the worker nodes. A way to do this AFTER the install is documented here; a sketch is also given below.
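
For reference, the post-install swap amounts to pinning each interface name to a MAC address in the RHEL-style network scripts. A sketch, with placeholder MAC addresses that must be replaced by the values 'ifconfig -a' reports for each port:

# Pin eth0 to LAN PORT 1 and eth1 to LAN PORT 0 by MAC (MACs below are placeholders)
echo "HWADDR=00:00:00:AA:BB:01" >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo "HWADDR=00:00:00:AA:BB:00" >> /etc/sysconfig/network-scripts/ifcfg-eth1
service network restart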

Then we will connect each worker node to the network on its LAN PORT 1 and enable PXE on that NIC. For now the worker nodes are called or001-or004 (to identify them). The current mapping of cables to switch ports is as follows:
  • UMOPT1
    • LAN_PORT_0 (green cable) -> nile:Gi3/3 (Want this to be eth1: 141.211.43.112 and an IPMI address TBD)
    • LAN_PORT_1 (blue cable) -> nile:Gi3/4 (Want this to be eth0: 10.1.1.1 Assigned by ROCKS)
  • or001
    • LAN_PORT_0 (green cable) -> nile:Gi3/5 (eth1: 192.168.10.1)
    • LAN_PORT_1 (blue cable) -> nile:Gi3/6 (eth0: Assigned by ROCKS)
  • or002
    • LAN_PORT_0 (green cable) -> nile:Gi3/7 (eth1: 192.168.10.2)
    • LAN_PORT_1 (blue cable) -> nile:Gi3/8 (eth0: Assigned by ROCKS)
  • or003
    • LAN_PORT_0 (green cable) -> nile:Gi3/9 (eth1: 192.168.10.3)
    • LAN_PORT_1 (blue cable) -> nile:Gi3/10 (eth0: Assigned by ROCKS)
  • or004
    • LAN_PORT_0 (green cable) -> nile:Gi3/11 (eth1: 192.168.10.4)
    • LAN_PORT_1 (blue cable) -> nile:Gi3/12 (eth0: Assigned by ROCKS)

Prior to installation we must make sure the IPMI card in each node is configured; we will use a floppy to do this automatically on each system.
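
Should the floppy route fail, the IPMI LAN parameters can usually also be set from a running OS with ipmitool. A sketch; the channel number (1) and the addresses below are assumptions, since our IPMI addresses are still TBD:

# Configure the IPMI LAN interface from the booted node
ipmitool lan set 1 ipaddr 192.168.10.101    # placeholder; actual IPMI address TBD
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 access on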

Also, we must pass the right arguments to the ROCKS V4.1 CD to ensure that the head node (UMOPT1) LAN_PORT_1 is "eth0" (private) and LAN_PORT_0 is "eth1" (public). The ROCKS DNS domain on the private network is .local. We will set up the default "compute" appliance to have the short name "c--". All nodes are in Rack 15.
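
Once the frontend is up, compute appliances are normally discovered with insert-ethers; if ROCKS 4.1 supports the cabinet flag (an assumption here), the generated names can be placed in Rack 15 directly:

# Run on UMOPT1, then PXE-boot each worker in turn
insert-ethers --cabinet=15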

MySQL Query Cache setup

Note that the query cache starts out disabled (query_cache_size = 0) even though query_cache_type reports ON; giving it a nonzero size enables it:

mysql> show variables like 'query_cache%';
+------------------------------+---------+
| Variable_name                | Value   |
+------------------------------+---------+
| query_cache_limit            | 1048576 | 
| query_cache_min_res_unit     | 4096    | 
| query_cache_size             | 0       | 
| query_cache_type             | ON      | 
| query_cache_wlock_invalidate | OFF     | 
+------------------------------+---------+
5 rows in set (0.00 sec)

mysql> set global query_cache_size = 60000000;
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like 'query_cache%';
+------------------------------+----------+
| Variable_name                | Value    |
+------------------------------+----------+
| query_cache_limit            | 1048576  | 
| query_cache_min_res_unit     | 4096     | 
| query_cache_size             | 59999232 | 
| query_cache_type             | ON       | 
| query_cache_wlock_invalidate | OFF      | 
+------------------------------+----------+
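
A SET GLOBAL value only lasts until mysqld restarts. To persist the setting, it can be added to the server's option file (a sketch; the /etc/my.cnf path is an assumption for this installation):

# Persist the query cache size across mysqld restarts
cat >> /etc/my.cnf <<'EOF'
[mysqld]
query_cache_size=64M
EOF

Afterwards, SHOW STATUS LIKE 'Qcache%' can be used to watch hit rates and confirm the cache is being used.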

-- ShawnMcKee - 07 Jun 2006