Installation and Configuration of Dell MD3460 Storage

Basic Hardware

This page refers specifically to hardware purchased in August 2016 using RBD 2016 funds. A single storage node consists of
  • Dell R730xd with 2 SAS 12Gb interface cards
  • Dell MD3460 with 60 8TB Helium hard drives
    • Each controller card has an RJ45 jack for the management interface
    • These come pre-configured with a specific IP address, so do not connect more than one MD3460 to the network at a time until the IP is changed to the one you want to use.
  • Dell MD3060e with 60 8TB Helium hard drives, daisy-chained from the MD3460 using 12Gb-to-6Gb cables
    • As with the MD3460, there are 2 management interfaces that need to be configured.

Needed Software

The MD3460 comes with a resource CD containing the MDSM software suite, version 11.25.0A06.0003-1. Earlier versions may not work with this new hardware. The suite can be installed on a PC connected to the network, and the .iso is also available for download from the Dell website. The Linux portion of the CD (installed by running SMIA_LINUX64.bin) contains 5 rpms
  • SMruntime
  • SMutil
  • SMesm
  • SMagent
  • SMclient
In addition, the host R730 should have the following rpms installed
  • device-mapper-multipath
  • device-mapper-multipath-libs
  • glibc
  • iscsi-initiator-utils
And this group of rpms, installed via yum groupinstall, since SMclient brings up a graphical dashboard (example commands below)
  • X Window System
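
A minimal sketch of installing the host prerequisites with yum (the group name "X Window System" is the EL6 name; adjust it if your release calls the group something else):

yum install device-mapper-multipath device-mapper-multipath-libs glibc iscsi-initiator-utils
yum groupinstall "X Window System"
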
Make sure that /etc/multipath.conf contains a device entry that the MD3460 will match; a sketch is below.
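
The following is only a sketch of the sort of device stanza commonly used for MD-series arrays behind device-mapper-multipath; the vendor/product strings and rdac settings here are assumptions that should be verified against Dell's multipath documentation for your firmware:

devices {
    device {
        vendor                  "DELL"
        product                 "MD34xx"
        path_grouping_policy    group_by_prio
        prio                    rdac
        path_checker            rdac
        hardware_handler        "1 rdac"
        failback                immediate
        no_path_retry           30
    }
}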

The installer should have created the file /var/opt/SM/LAUNCHER_ENV. If it did not, create it yourself; it contains only a single line
BASEDIR=/opt/dell/mdstoragemanager
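
If you do need to create it by hand, a one-liner such as this will do:

mkdir -p /var/opt/SM && echo "BASEDIR=/opt/dell/mdstoragemanager" > /var/opt/SM/LAUNCHER_ENV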

Configuring IP Addresses on the Storage Units

Each MD3460 comes with a pair of management ports for out-of-band access. These default to addresses in the 192.168.128.x or 192.168.129.x range, depending a bit on the model, so if two units with default addresses are on the network at once they will conflict. Set the network IP, netmask, and gateway over a serial connection: the MD3460 has a mini-USB serial port (baud rate 115200), while the MD3060e uses an old-style PS/2-type connector (baud rate 38400). Each unit came with a cable to go from DB9 to the appropriate end connector. Use your favorite terminal connection method; I used putty through a serial port on my laptop.

The MD3460 requires a login: send a "break", type S, and the magic word is supportDell. For the MD3060e you are dropped straight into a key-letter-driven menu once you connect.
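
If you connect from a Linux machine instead of putty, something like the following works (the /dev/ttyUSB0 device name is an assumption; check dmesg after plugging in your USB-serial adapter):

screen /dev/ttyUSB0 115200    # MD3460 mini-USB serial console
screen /dev/ttyUSB0 38400     # MD3060e serial console via its adapter cable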

Further Storage Setup

With the IP addresses properly configured, you can run SMclient on the R730. Search for available devices (it will find all of them on the network). Select the one you are working with, and under the "Setup" tab you can "Rename Storage Array" and "Set a Storage Array Password". We named ours after the machine it is on, umfs06, as MD3460_UMFS06.

All the auto-magically discovered arrays were in-band only. You should also set each array up for out-of-band management, to wit
  • Right-click on the "Discovered Storage Arrays" item in the left column of the SMclient "Devices" tab, and select "Add Storage Arrays".
  • In the first 2 fields, enter the IP addresses assigned to the management ports, click "Add", and there it is, out of band management.
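
Alternatively, the out-of-band addresses can be added from the command line; a sketch only (check the SMcli built-in help for the exact syntax in your MDSM version):

SMcli -A 192.168.48.140 192.168.48.141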

Now you should be ready to create virtual disks on the storage.

Creating the Virtual Disks

The simplest way to create the virtual disks is to use the SMcli command line interface. We run this with both IPs of the MD3460 specified on the command line. To create 12 RAID-6, 10-disk arrays, spread across all drawers so that no array is lost if a single drawer fails (we actually saw this happen once), run the following command

SMcli 192.168.48.140 192.168.48.141 -c "autoConfigure storageArray physicalDiskType=SAS raidLevel=6 diskGroupWidth=10 diskGroupCount=12 virtualDisksPerGroupCount=1 hotSpareCount=0 segmentsize=128;" -p your_password

Note that these commands do NOT run quickly. They are not "get a cup of coffee" slow, but they are definitely "I am getting impatient, why is nothing happening" slow.

The above command initiates 12 vdisk creation tasks. Only 8 will run at once, and each takes about 100 hours (yes, that is correct; there is a LOT of storage here). As each completes, the queued tasks will start. You can check on the running and queued tasks as follows:

SMcli 192.168.48.140 192.168.48.141 -c "show storageArray longRunningOperations;" -e -p your_passwd
Executing script...


Long Lived Operations:

   LOGICAL DEVICES  OPERATION       STATUS         TIME REMAINING
   dcache6          Initialization  66% Completed  20 hr, 48 min
   dcache7          Initialization  67% Completed  20 hr, 28 min
   dcache8          Initialization  57% Completed  26 hr, 5 min
   dcache9          Initialization  56% Completed  26 hr, 41 min
   dcache10         Initialization  Pending        Not Available
   dcache11         Initialization  Pending        Not Available

Script execution complete.

SMcli completed successfully.

Each vdisk can be created independently of the others by SMcli commands such as this one

SMcli 192.168.48.150 192.168.48.151 -c "create virtualDisk physicalDiskCount=10 physicalDiskMediaType=HDD physicalDiskType=SAS raidLevel=6 userLabel=\"dcache11\" segmentSize=128 drawerLossProtect=True;" -p your_pw
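
If you prefer to create all 12 this way, each with its final name from the start, a loop along these lines works (the dcacheN labels simply follow our naming scheme; substitute your own):

for i in $(seq 1 12) ; do
  SMcli 192.168.48.150 192.168.48.151 -c "create virtualDisk physicalDiskCount=10 physicalDiskMediaType=HDD physicalDiskType=SAS raidLevel=6 userLabel=\"dcache$i\" segmentSize=128 drawerLossProtect=True;" -p your_pw
done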

Attaching the vdisks to LUNs and Renaming Them to Something Reasonable

After the vdisks are ready you must map each one to a LUN to allow access by the host system. Use SMclient to do this. Under the "Devices" tab of the SMclient dashboard, double-click the appropriate storage array to open a screen for it. Go to the "Host Mappings" tab and, in the left column, open either the "Undefined Mappings" group or the "Default Group". With the former you will see each individual vdisk listed with "LUN ?"; with the latter you right-click the Host and select "Add LUN Mapping".

At this point, it is a good idea to reboot the R730 to get clean service starts.

Following the reboot, give the vdisks real names. For one thing, the script attached below does not cope well with the serially numbered vdisk names that the SMcli autoConfigure command above (the one creating all 12 vdisks at once) assigns. If you created the vdisks individually, they already have whatever nice names you gave them.

The GUI offers a rename option for each vdisk, but in our instance it was greyed out (perhaps because we set a password; we are not sure). So, do the renaming with SMcli.

SMcli 192.168.48.140 192.168.48.141 -c "set virtualDisk [\"1\"] userLabel=\"dcache\";" -e -p your_passwd

Run this for all 12 vdisks. The system seemed to complain when I renamed them serially from 1 to 12, so I first did all the odd-numbered vdisks, then the even-numbered ones (see the sketch below).
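
A sketch of that odd-then-even renaming loop (the default names "1" through "12" and the dcacheN labels are only examples; substitute whatever names autoConfigure actually assigned and whatever labels you want):

for i in 1 3 5 7 9 11 2 4 6 8 10 12 ; do
  SMcli 192.168.48.140 192.168.48.141 -c "set virtualDisk [\"$i\"] userLabel=\"dcache$i\";" -e -p your_passwd
done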

You can either rename the vdisks first or add the LUN mappings first; the order does not matter.

Creating file systems

We have used xfs as our file system of choice. Following the full initialization, create each of the 12 file systems, for example (su=128k matches the segmentSize=128 given to SMcli, and sw=8 is the number of data disks in a 10-disk RAID-6 group):
mkfs.xfs -d su=128k,sw=8 -L umfs06_2 -f /dev/mapper/mpathc

The file system creation whines about mis-matched parameters; see "Issues" below.

Of course, you need to know which multipath device to use in such a command. The useful script attached below is one we use for both standard Dell storage such as the MD1200 and for this multipath-accessed storage. Its output contains lines such as:
Device mpathc on MD3460_UMFS06, labeled dcache1 contains dcache1
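
If you want to piece the mapping together by hand rather than with the script, SMdevices maps /dev/sdX paths to vdisk labels and multipath -ll maps each mpathN device to its /dev/sdX paths (MD3460_UMFS06 is just our array name; grep for yours):

SMdevices | grep MD3460_UMFS06
multipath -ll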

Mounting the file system

Add to /etc/fstab lines such as this
LABEL=umfs06_2   /dcache1   xfs   defaults,inode64,noatime   1 2
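
Then create the mount points and mount, for example (the /dcache1 mount point follows our naming and is just an example):

mkdir -p /dcache1
mount /dcache1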

Issues

We have not yet tuned either the segmentSize sent to SMcli or the mkfs.xfs parameters. With the values above, while balancing our dCache storage, we see about 1GB/s, on average, flowing into the storage, with the R730 attached to the network at either 2x10Gb or 4x10Gb.

Other Tasks

Alerts can be enabled globally or per-shelf from the SMclient dashboard. One R730 MDSM instance can monitor all of the storage in your network.

Play around in the dashboard to see what else can be seen.

Some useful commands

See the online manual for SMcli to get full help on the available commands.

SMcli 192.168.48.140 192.168.48.141 -c "show storageArray longRunningOperations;" -e -p your_pw
SMcli 192.168.48.140 192.168.48.141 -c "show storageArray summary;" -e -p your_pw
SMcli 192.168.48.140 192.168.48.141 -c "show virtualDisks;" -e -p your_pw

A useful script

[root@umfs06 ~]# cat tools/storage_details.sh
#!/bin/bash
#
# Detail what we know about these controllers
# Loop over more controllers than we will ever have
# Does not (yet) work on the new MD3260/3060E storage shelves :(
#
for (( i=0 ; i<5 ; i++ )) ; do
  omreport storage controller controller=$i > /dev/null 2>&1
  if [ $? -ne 0 ] ; then
    continue
  fi
#
# filter on acceptable controllers
#
  contType=`omreport storage controller controller=${i} | grep ^Name | head -1 | awk '{print $3 " " $4}'`
#
# Known Controller types at AGLT2 are (at least):
#  PERC 5
#  PERC 6/i
#  PERC 6/E
#  PERC H800
#  PERC H300
#  6Gbps SAS
#
#  case $contType in
#    "PERC 6/E" | "PERC H800" | "6Gbps SAS" )
       echo ""
       echo "======= Controller $i Information ======="
       omreport storage controller controller=$i | grep "Controller\ "
       omreport storage controller controller=$i info=pdslotreport|grep -A 2 ^Details|grep -v "\-\-"
#
#  Get the list of valid vdisk ids on this controller
## omreport storage vdisk controller=2
##  Gets lines like:
##  ID                  : 0
#
       vdlist=""
       vdiskCnt=0
       while read vd ; do
         echo $vd | grep -q ^ID
         if [ $? -eq 0 ] ; then
           newVd=`echo $vd | tr -d '[[:space:]]' | awk -F : '{print $2}'`
           vdlist="${vdlist}${newVd} "
           (( vdiskCnt++ ))
         fi
       done < <(omreport storage vdisk controller=$i)
       echo "--- There are ${vdiskCnt} active vdisks on this controller"
#
#  Assemble the list of pdisks in each vdisk
## omreport storage pdisk controller=1 vdisk=1
##  Gets lines like:
##  List of Physical Disks belonging to ost12
##  ID                        : 0:0:5
#
       for conDisk in $vdlist ; do
         while read pd ; do
           if echo $pd | grep -q "belonging to" ; then
             pdlist=""
             pdname=`echo $pd | awk '{print $7}'`
             echo -n "    VD $conDisk Named: $pdname is "
             continue
           fi
           if echo $pd | grep -q ^ID ; then
             pdlist="${pdlist}`echo $pd | awk -F : '{print $2 ":" $3 ":" $4}'` "
           fi
         done < <(omreport storage pdisk controller=$i vdisk=$conDisk)
         echo $pdlist
       done
#       ;;
#
#    * )
#       echo ""
#       echo "======= Skip Controller $i of Type $contType ======="
#       ;;
#  esac
done
#
# See if the multipathd is running.
# ASSUME it will only be running on our MD3260 systems
# If it is, loop over all defined virtual disks, excluding system
# disks, and print information about them
#
chkconfig --list | grep multipathd | grep -q on
if [ $? -ne 0 ] ; then
  exit 0
fi

myName=`hostname -s`
myUpperName=`echo $myName | tr '[:lower:]' '[:upper:]'`
echo ""
echo "======= Multipath Information ======="
#
# foundStr will keep track of which volumes were already found
#
foundStr=""
SMdevices | grep ${myUpperName} | grep -v Access |
while read line ; do
  dName=`echo $line | awk '{print $8}' | cut -d , -f 1`
  echo "\"$foundStr\"" | grep -q "${dName}\s"
#
#   Skip this one if this volume was already found
#
  if [ $? -eq 0 ] ; then
    foundStr="${foundStr}${dName} "
    continue
  fi
#
  array=`echo $line | awk '{print $5}'`
  foundStr="${foundStr}${dName} "
  devDev=`echo $line | awk '{print $1}'|awk -F / '{print $3}'`
  mpDev=`multipath -ll|grep -B 8 "\s${devDev}\s"|grep mpath|tail -1|awk '{print $1}'`
  device="/dev/mapper/${mpDev}"
  xfs_admin -l ${device} 2> /dev/null > /dev/null
  if [ $? -ne 0 ] ; then
    hasLabel="NO File System"
  else
    hasLabel=${dName}
  fi
  echo " Device ${mpDev} on ${array} labeled ${dName} contains ${hasLabel}"
done

-- BobBall - 14 Sep 2016
