mono-spaced text indicates terminal output
bold mono-spaced text indicates terminal input (by the user)
mono-spaced text in italics indicates variable data that may differ on your installation
> prompt indicates commands that do not require a specific shell or root access
# prompt indicates commands that require root access
$ prompt indicates sh- or bash-specific commands
% prompt indicates csh- or tcsh-specific commands
You have installed a subset of VDT version 1.3.10b:
CA Certificates v13 (includes IGTF 1.1 CAs)
EDG CRL Update 1.2.5
EDG Make Gridmap 2.1.0
Fault Tolerant Shell (ftsh) 2.0.12
Generic Information Provider 1.0.15 (Iowa 15-Feb-2006)
Globus Toolkit, pre web-services, client 4.0.1
Globus Toolkit, pre web-services, server 4.0.1
Globus Toolkit, web-services, client 4.0.1
Globus Toolkit, web-services, server 4.0.1
GLUE Schema 1.2 draft 7
GPT 3.2
Java SDK 1.4.2_10
KX509 20031111
Logrotate 3.7
MonALISA 1.4.12
MyProxy 3.4
MySQL 4.1.11
PPDG Cert Scripts 1.7
PRIMA Authorization Module 0.3
RLS, client 3.0.041021
UberFTP 1.18
Virtual Data System 1.4.4
gsiftp, globus-gatekeeper
edg-crl-upgraded, gris, globus-ws, mysql (used by globus-ws), condor (set up optionally by ManagedFork), MLD (set up by configure-osg.sh)
mis (MIS-CI), root (addition of an entry for vdt log rotation)
 
gsiftp              2811/tcp   # Added by the VDT
globus-gatekeeper   2119/tcp   # Added by the VDT
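To confirm these entries are present on your system, a quick check (not part of the official procedure) is:
> grep -E 'gsiftp|globus-gatekeeper' /etc/services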
> pacman -get http://vdt.cs.wisc.edu/vdt_1310_cache:Condor
The installation will ask for local storage space for 3 separate purposes:
$ export VDTSETUP_CONDOR_LOCATION=$CONDOR_ROOT
will suffice (provided $CONDOR_ROOT is set, per the usual Condor config).
/etc/hosts.{allow,deny}), you'll need to review the
Firewall section of this guide and arrange for appropriate Internet access.
myvo:50 yourvo:10 anothervo:20 local:20
Policy URL: This is the URL for the document describing the usage policy/agreement for this resource.
For the USA: http://geotags.com/, or you can search for your location on Google.
cd to an install directory somewhere. Then:
> wget http://physics.bu.edu/pacman/sample_cache/tarballs/pacman-3.16.1.tar.gz
> tar --no-same-owner -xzvf pacman-3.16.1.tar.gz
Next, set up your environment:
> cd pacman-3.16.1
For sh and bash shells:
$ source setup.sh
For csh and tcsh shells:
% source setup.csh
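To confirm the Pacman environment is set correctly, a quick sanity check is to verify that the pacman command is now on your PATH; it should resolve to the script inside the pacman-3.16.1 directory you just unpacked:
> which pacman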
> mkdir INSTALLATION_DIRECTORY (e.g. /usr/local/osg)
> cd INSTALLATION_DIRECTORY
> pacman -get OSG:PACKAGE_NAME
> pacman -pretend-platform:RHEL-3 -get OSG:ce
> pacman -platforms
> pacman -allow save-setup
To attempt recovery of the setup.sh file, execute the command:
> pacman -setup
> pacman -remove package_name
For details on these commands, see the Pacman home page.
> export VDT_LOCATION=/usr/local/grid
> cd $VDT_LOCATION
> pacman -get OSG:ce
See the Pacman section of this guide if you encounter an 'unsupported' platform message. This will take between 10 and 60 minutes to complete, depending upon the system and network connection. During this time you may open a second terminal and watch the progress by monitoring the $VDT_LOCATION/vdt-install.log file. You should not be asked any other questions during the installation process. The installation should complete with the following message.
Downloading [srmclient-1.23.tar.gz] from [pacman.uits.indiana.edu]...
6/6 MB downloaded...
Untarring [srmclient-1.23.tar.gz]...
Downloading [ml-patch.tar.gz] from [pacman.uits.indiana.edu]...
Untarring [ml-patch.tar.gz]...
Pacman Installation of OSG-0.4.1 Complete
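While the installation is still running, you can follow the log mentioned above from a second terminal, for example:
> tail -f $VDT_LOCATION/vdt-install.log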
$ source setup.sh
or
% source setup.csh
> pacman -get OSG:Globus-Condor-Setup
> pacman -get OSG:Globus-PBS-Setup
> pacman -get OSG:Globus-LSF-Setup
> pacman -get OSG:Globus-SGE-Setup
For OSG-ITB, substitute iVDGL for OSG:
> pacman -get iVDGL:Globus-Condor-Setup
and so forth. This guide will not go into detail on the installation of any of these optional packages.
VDTSETUP_CONDOR_LOCATION: the location of your Condor installation (e.g. /opt/condor). The Condor bin/, sbin/, etc/, lib/... directories should be directly under this location.
VDTSETUP_CONDOR_CONFIG (optional): the location of your Condor configuration file (if non-standard). Default is ${VDTSETUP_CONDOR_LOCATION}/etc/condor_config.
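For example, assuming Condor is installed under /opt/condor with its configuration file in the default location, you would set (before the installation asks its setup questions):
$ export VDTSETUP_CONDOR_LOCATION=/opt/condor
$ export VDTSETUP_CONDOR_CONFIG=/opt/condor/etc/condor_config
The second export is shown only for illustration; it is needed only if your condor_config lives somewhere non-standard.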
cd $VDT_LOCATION
source $VDT_LOCATION/setup.sh
pacman -get OSG:ManagedFork
source $VDT_LOCATION/setup.sh
$VDT_LOCATION/vdt/setup/configure_globus_gatekeeper --managed-fork y
By default, the managed fork jobmanager will behave just like the fork jobmanager.
If you wish to restrict it you need to modify your local Condor configuration.
If you're using Condor from the VDT this can be done by editing
$VDT_LOCATION/condor/local.<hostname>/condor_config.local.
Only allow 20 local universe jobs to execute concurrently:
START_LOCAL_UNIVERSE = TotalLocalJobsRunning < 20
Set a hard limit on most jobs, but always let grid monitor jobs run (strongly recommended):
START_LOCAL_UNIVERSE = TotalLocalJobsRunning < 20 || GridMonitorJob == TRUE
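After editing the Condor configuration file, a minimal way to apply and check the change with the standard Condor tools (assuming the Condor binaries are on your PATH, e.g. after sourcing the VDT setup script) is:
> condor_reconfig
> condor_config_val START_LOCAL_UNIVERSE
condor_config_val should echo back the expression you just set.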
source $VDT_LOCATION/setup.sh
$VDT_LOCATION/vdt/setup/configure_globus_gatekeeper --managed-fork n
> $VDT_LOCATION/vdt/setup/setup-cert-request
Reading from /g3dev/globus/TRUSTED_CA
Using hash: 1c3f2ca8
Setting up grid-cert-request
Running grid-security-config...
......
A list of CAs was added as authorized CAs on your system.
Important Note: In the past, you had the option of putting these certificates in the /etc/grid-security/certificates directory or in the local $VDT_LOCATION/globus/TRUSTED_CA directory. This is no longer an option. The OSG installation is pre-configured to place the certificates in the local TRUSTED_CA directory. The edg-crl-upgrade daemon will be updating CRLs in this local directory only. If you want to maintain certificates in the /etc/grid-security/certificates directory, you should link it to the local TRUSTED_CA location (symlink in either direction) and copy the CA files appropriately.
Please review the list of authorized CAs and modify the set in $X509_CERT_DIR ($VDT_LOCATION/globus/TRUSTED_CA) as needed to match your local policy.
The daemon edg-crl-upgrade should be running at all times in the background to refresh the CRLs from these CAs. If CRLs are not kept current, incoming connections will fail. To check that it's running:
> ps axwww | grep edg-crl-upgrade
If not running, do:
> /etc/init.d/edg-crl-upgraded start
The Certificate Scripts Package Guide which has been installed can assist you with choosing Certificate Authorities to trust and with periodically checking that the CRLs (Certificate Revocation Lists) have not expired.
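As a quick manual check of CRL freshness (assuming the CRL files in your TRUSTED_CA directory carry the usual <hash>.r0 suffix; the hash below is just the one from the example output above), you can inspect an individual CRL with openssl:
> openssl crl -noout -lastupdate -nextupdate -in $VDT_LOCATION/globus/TRUSTED_CA/1c3f2ca8.r0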
$ cd $VDT_LOCATION
$ source ./setup.sh
$ cert-request -ou s -dir .
Processing OU=Services request.
Give reason (1 line) you qualify for certificate, such as
member of CMS experiment or
collaborating with Condor team, etc.
reason: testing for PPDG
input full hostname: goofy.looney.tunes
Generating a 2048 bit RSA private key
. ....................................+++
..................+++
writing new private key to './5842key.pem'
-----
input your email address: address@your.email.server
input your complete phone number: 9995551212
Choose a registration authority to which you are affiliated.
Enter this      for this registration authority
anl ANL: Argonne National Lab
epa NCC-EPA: Environmental Protection Agency
esg ESG: Earth System Grid
esnet ESnet: DOE Science network
fnal FNAL: Fermilab host and service certificates
ivdgl iVDGL: see www.ivdgl.org
lbl LBNL: Berkeley Lab
lcg LCG: LHC Computing Grid
nersc NERSC: computer center, see www.nersc.gov
o Other: if you can not tell what to choose
ornl ORNL: Oak Ridge National Lab
pnnl PNNL: Pacific Northwest National Lab
ppdg PPDG: includes BNL, JLab, SLAC and many HEP & NP experiments
(choose from left column): ppdg
ppdg
PPDG
You must agree to abide by the DOEGrids policies,
at http://www.doegrids.org/Docs/CP-CPS.pdf
Do you agree (y,N): y
Your Certificate Request has been successfully submitted
Your Certificate Request id: 2005
You will receive a notification email from the CA when your certificate
has been issued. Please disregard the instructions to download your
certificate through a web browser and use the cert-retrieve script instead.
After the certificate is approved you will receive an email which includes the serial number of the new certificate. Use that serial number to retrieve the certificate and move it to the installation directory.
> cert-retrieve -dir . -certnum 0x299
using CA doegrids
Checking that the usercert and ./5842key.pem match
writing RSA key
./usercert.pem and ./userkey.pem now contain your Globus credential
> mv ./usercert.pem /etc/grid-security/hostcert.pem
> mv ./userkey.pem /etc/grid-security/hostkey.pem
> chmod 444 /etc/grid-security/hostcert.pem
> chmod 400 /etc/grid-security/hostkey.pem
The following command will verify the certificate is readable. The output shown will be similar, but specific to your request.
> openssl x509 -text -noout -in /etc/grid-security/hostcert.pem
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 665 (0x299)
Signature Algorithm: sha1WithRSAEncryption
Issuer: DC=org, DC=DOEGrids, OU=Certificate Authorities, CN=DOEGrids CA 1
Validity
Not Before: Dec 13 23:55:14 2005 GMT
Not After : Dec 13 23:55:14 2006 GMT
Subject: DC=org, DC=doegrids, OU=Services, CN=goofy.looney.tunes
.........
.
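You can also confirm that the installed host certificate and key belong together by comparing their moduli; the two digests should be identical:
> openssl x509 -noout -modulus -in /etc/grid-security/hostcert.pem | openssl md5
> openssl rsa -noout -modulus -in /etc/grid-security/hostkey.pem | openssl md5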
$ source $VDT_LOCATION/setup.sh
$ cert-request -ou s -dir . -host hostname.domain.tld -service ldap
And follow the instructions, which are very similar to the host cert instructions, except that you are creating ldapkey.pem and ldapcert.pem.
$VDT_LOCATION/vdt/setup/configure_globus_gatekeeper script.
Review the configuration files /etc/xinetd.d/globus-gatekeeper and
/etc/xinetd.d/gsiftp (or in /etc/inetd.conf) that were created during the pacman installation.
Additionally, the /etc/services file was updated with the following entries:
gsiftp              2811/tcp   # Added by the VDT
globus-gatekeeper   2119/tcp   # Added by the VDT
If you are satisfied with this configuration, restart the xinetd (or inetd) daemon to pick up the configuration changes:
# /etc/rc.d/init.d/xinetd restart
Stopping xinetd:    [  OK  ]
Starting xinetd:    [  OK  ]
To verify that the gatekeeper is running at this point, you should be able to telnet to the public IP address of your site on port 2119 and get a response.
> telnet hostname port
It should return Connected to.... The same should be true of the gsiftp port (2811 by default).
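For example (replace your.host.name with your gatekeeper's public name; the address shown is illustrative):
> telnet your.host.name 2119
Trying 131.225.207.100...
Connected to your.host.name.
Escape character is '^]'.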
If you asked for MonALISA to be activated, an /etc/init.d/MLD script will be added to your system's rc.d services and started.
Usage:
./configure-osg.sh            Executes the script in question and answer mode
./configure-osg.sh --help     Steps through the set of information to be collected
./configure-osg.sh --display  Displays the current osg-attributes.conf attributes
> source $VDT_LOCATION/setup.(c)sh
> grid-proxy-init
(you will be prompted for your GRID pass phrase)
Then, to get the subject (DN) of your proxy, run:
> grid-proxy-info -subject
Output....
/DC=gov/DC=fnal/O=Fermilab/OU=People/CN=Dane Skow/UID=dane
Take the subject string and add it to /etc/grid-security/grid-mapfile, assigning it to a local user account (you can use any of the VO accounts you created at the beginning to test). So the grid-mapfile should have at least one entry like:
> cat /etc/grid-security/grid-mapfile
"/DC=gov/DC=fnal/O=Fermilab/OU=People/CN=Dane Skow/UID=dane" usatlas1
This is enough to enable the rest of the installation testing.
> globus-job-run $(hostname):2119/jobmanager-fork /usr/bin/id
Output....
uid=9872(usatlas1) gid=9872(usatlas1) groups=9872(usatlas1)
You should see the UNIX user account assigned based on the grid-mapfile you created previously.
You can also view the $VDT_LOCATION/globus/var/globus-gatekeeper.log file to see the authorization messages:
TIME: Sun Jan 8 23:12:43 2006  PID: 29703 -- Notice: 6: Got connection 131.225.207.100 at Sun Jan 8 23:12:43 2006
TIME: Sun Jan 8 23:12:43 2006  PID: 29703 -- Notice: 5: Authenticated globus user: /DC=gov/DC=fnal/O=Fermilab/OU=People/CN=Dane Skow/UID=dane
TIME: Sun Jan 8 23:12:43 2006  PID: 29703 -- Notice: 0: GRID_SECURITY_HTTP_BODY_FD=7
TIME: Sun Jan 8 23:12:43 2006  PID: 29703 -- Notice: 5: Requested service: jobmanager-fork
TIME: Sun Jan 8 23:12:43 2006  PID: 29703 -- Notice: 5: Authorized as local user: usatlas1
TIME: Sun Jan 8 23:12:43 2006  PID: 29703 -- Notice: 5: Authorized as local uid: 9872
TIME: Sun Jan 8 23:12:43 2006  PID: 29703 -- Notice: 5: and local gid: 9872
TIME: Sun Jan 8 23:12:43 2006  PID: 29703 -- Notice: 0: executing /storage/local/data1/osg/globus/libexec/globus-job-manager
TIME: Sun Jan 8 23:12:43 2006  PID: 29703 -- Notice: 0: GATEKEEPER_JM_ID 2006-01-08.23:12:43.0000029703.0000000000 for /DC=gov/DC=fnal/O=Fermilab/OU=People/CN=Dane Skow/UID=dane on 131.225.207.100
TIME: Sun Jan 8 23:12:43 2006  PID: 29703 -- Notice: 0: GRID_SECURITY_CONTEXT_FD=10
TIME: Sun Jan 8 23:12:43 2006  PID: 29703 -- Notice: 0: Child 29808 started
> globus-job-run $(hostname):2119/jobmanager-condor /usr/bin/id
Output....
uid=9872(usatlas1) gid=9872(usatlas1) groups=9872(usatlas1)
You should see the same results as with the fork queue submission.
The globus-gatekeeper.log file will look identical except for the "Request service" message:
PID: 29703 -- Notice: 5: Requested service: jobmanager-condor
$ echo "My test gsiftp file" > /tmp/gsiftp.test
Copy the file to the $OSG_DATA directory:
> source $VDT_LOCATION/monitoring/osg-attributes.conf (This is simply to get the OSG_DATA variable)
$ globus-url-copy file:/tmp/gsiftp.test gsiftp://$(hostname)${OSG_DATA}/gsiftp.test
Verify the file was copied to the $OSG_DATA directory:
> ls -l $OSG_DATA/gsiftp.test
-rw-r--r-- 1 usatlas1 usatlas1 20 Jan 9 13:29 /storage/local/data1/osg/OSG.DIRS/data/gsiftp.test
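As an optional further check, you can copy the file back through the gsiftp server and compare it with the original (the .back filename is just illustrative):
$ globus-url-copy gsiftp://$(hostname)${OSG_DATA}/gsiftp.test file:/tmp/gsiftp.test.back
$ diff /tmp/gsiftp.test /tmp/gsiftp.test.back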
To verify the accounting information was captured, you can view the $VDT_LOCATION/globus/var/log/gridftp.log for the ftp copy operation you just performed. You should see an entry similar to this for the transfer statistics collected by various systems like MonALISA and ACDC:
DATE=20060110211714.329666 HOST=cmssrv09.fnal.gov PROG=globus-gridftp-server NL.EVNT=FTP_INFO START=20060110211714.247555 USER=ivdgl FILE=/tmp/gridcat-gsiftp-test.gridcat.21602.remote BUFFER=0 BLOCK=262144 NBYTES=28 VOLUME=/ STREAMS=1 STRIPES=1 DEST=[129.79.4.64] TYPE=RETR CODE=226
The authorization and other informational messages are captured in the $VDT_LOCATION/globus/var/log/gridftp-auth.log:
[2834] Tue Jan 10 15:21:37 2006 :: Server started in inetd mode.
[2834] Tue Jan 10 15:21:37 2006 :: Configuration read from /storage/local/data1/osg/globus/etc/gridftp.conf
[2834] Tue Jan 10 15:21:37 2006 :: New connection from: cmssrv09.fnal.gov:38376
[2834] Tue Jan 10 15:21:38 2006 :: User uscms112 successfully authorized
[2834] Tue Jan 10 15:21:38 2006 :: Starting to transfer "/storage/local/data1/osg/OSG.DIRS/app/weigand.jgw.sh.2707".
[2834] Tue Jan 10 15:21:38 2006 :: Finished transferring "/storage/local/data1/osg/OSG.DIRS/app/weigand.jgw.sh.2707".
[2834] Tue Jan 10 15:21:38 2006 :: Closed connection from cmssrv09.fnal.gov:38376
$ cd $VDT_LOCATION
$ source ./setup.sh
$ $VDT_LOCATION/MIS-CI/configure-misci.sh --choose_user
Editing site configuration...
Creating MIS-CI.db
:
(a lot of information on the tables it is creating will appear before any questions are asked)
:
Would you like to set up MIS-CI cron now? (y/n) y
At what frequency (in minutes) would you like to run MIS-CI ? [10] 10
Under which account the cron should run ? [ivdgl] mis
Frequency 10
User mis
Would you like to create MIS-CI crontab ? (y/n) y
Updating crontab
Configuring MIS jobmanager
/storage/local/data1/osg/MIS-CI/share/misci/globus/jobmanager-mis is created
Your site configuration :
sitename ITB_INSTALL_TEST
dollarapp /storage/local/data1/osg/OSG.DIRS/app
dollardat /storage/local/data1/osg/OSG.DIRS/data
dollartmp /storage/local/data1/osg/OSG.DIRS/data
dollarwnt /storage/local/data1/osg/OSG.DIRS/wn_tmp
dollargrd /storage/local/data1/osg
batcheS condor
vouserS uscms01 ivdgl sdss usatlas1 cdf grase fmri gadu
End of your site configuration
If you would like to add more vo users,
you should edit /storage/local/data1/osg/MIS-CI/etc/misci/mis-ci-site-info.cfg.
You have additional batch managers : condor .
If you would like to add these,
you should edit /storage/local/data1/osg/MIS-CI/etc/misci/mis-ci-site-info.cfg.
configure--misci Done
Please read /storage/local/data1/osg/MIS-CI/README
MIS-CI is collecting information using crontab as the user mis (or ivdgl if you left it as the default). Therefore, in order to stop
MIS-CI processes, crontab should be removed. The script $VDT_LOCATION/MIS-CI/uninstall-misci.sh
is provided for this purpose:
> cd $VDT_LOCATION
> source setup.(c)sh
> cd MIS-CI
> ./uninstall-misci.sh
After you finish configuring MIS-CI, a few checks might be necessary:
> crontab -u mis -l
> $VDT_LOCATION/MIS-CI/sbin/run-mis-ci.sh
> source $VDT_LOCATION/setup.(c)sh
> grid-proxy-init
(enter your password)
> globus-job-run <hostname>/jobmanager-mis /bin/sh siteinfo
(Here <hostname> is the CE hostname.)
...... sample output ....
id 1
ymdt Wed Jan 11 19:00:01 UTC 2006
sitename ITB_INSTALL_TEST
hostname localhost
VOname local:100
appdir /storage/local/data1/osg/OSG.DIRS/app
datadir /storage/local/data1/osg/OSG.DIRS/data
tmpdir /storage/local/data1/osg/OSG.DIRS/data
wntmpdir /storage/local/data1/osg/OSG.DIRS/wn_tmp
grid3dir /storage/local/data1/osg
jobcon condor
utilcon fork
locpname1
locpname2
ncpurunning 0
ncpus 4
> ps -efwww |grep ldap
daemon 7584 1 0 15:25 ? 00:00:00 /bin/sh /storage/local/data1/osg/globus/sbin/grid-info-soft-register
-log /storage/local/data1/osg/globus/var/grid-info-system.log
-f /storage/local/data1/osg/globus/etc/grid-info-resource-register.conf
-- /storage/local/data1/osg/globus/libexec/grid-info-slapd
-h ldap://0.0.0.0:2135 -d 0
-f /storage/local/data1/osg/globus/etc/grid-info-slapd.conf
daemon 7627 7584 1 15:25 ? 00:00:00 /storage/local/data1/osg/globus/libexec/slapd
-h ldap://0.0.0.0:2135 -d 0 -f /storage/local/data1/osg/globus/etc/grid-info-slapd.conf
daemon 7639 1 0 15:25 ? 00:00:00 /bin/sh /storage/local/data1/osg/globus/sbin/grid-info-soft-register
-log /storage/local/data1/osg/globus/var/grid-info-system.log -register -t mdsreg2
-h cmssrv09.fnal.gov -p 2135 -period 600
-dn Mds-Vo-Op-name=register, Mds-Vo-name=ITB_INSTALL_TEST, o=grid -daemon -t ldap
-h cmssrv09.fnal.gov -p 2135 -ttl 1200 -r Mds-Vo-name=local, o=grid -T 20 -b ANONYM-ONLY
-z 0 -m cachedump -period 30
If it is not running, you will need to restart it:
Usage: > /etc/init.d/gris [start | stop]
MDS should be configured for anonymous bind. You can send a test query to your local host, which will perform no authentication on the user submitting the request. First, verify you have no proxy certificate (/tmp/x509up_u<your UID>). If one exists, remove it first. Then,
> source $VDT_LOCATION/setup.sh
> grid-info-search -anonymous
... your screen should scroll for a while showing a lot of data...
....you can redirect the output to validate
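One way to validate the output is to redirect it to a file and search for GLUE attributes of interest; GlueCEUniqueID is one such attribute from the GLUE schema included in this installation (the file name below is just an example):
> grid-info-search -anonymous > /tmp/mds-output.txt
> grep GlueCEUniqueID /tmp/mds-output.txt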
export grid_site_state_bit = 1
NOTE: It might take up to 2 hours for registered sites to take effect in the GridCat display. If your site is not registered with the OSG-GOC, see the instructions in the OSG Registration section of this document. Until your site is registered, it will not appear in GridCat. If your site decides to become inactive for various reasons, e.g., site maintenance, the site administrator should set the value of grid_site_state_bit to be other than 1. Example grid-site-state-info file.
> cd $VDT_LOCATION
> source ./setup.sh
> grid-proxy-init
....enter your passphrase
> cd verify
> ./site_verify.pl
The results will indicate the various tests that are performed, with conditions reported as FAILED, UNTESTED, NOT WORKING, NONE, or NO.
GLOBUS_TCP_PORT_RANGE=beginport,endport. It should span at least 100 ports for a small site.
$VDT_LOCATION/MonaLisa/Service/VDTFarm/ml.properties
GLOBUS_TCP_SOURCE_RANGE=beginport,endport.
GLOBUS_TCP_PORT_RANGE=beginport,endport for inbound ports. If you restrict outbound connections, you will also need to set GLOBUS_TCP_SOURCE_RANGE=beginport,endport. These may be set either in $VDT_LOCATION/vdt/etc/vdt-local-setup.sh, or in the xinetd configuration files -- the examples below use xinetd. The variables will be used by GRAM, GridFTP, and any clients that require them.
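If you choose the vdt-local-setup.sh route instead of xinetd, a minimal sketch (using the same 40000,50000 range as the xinetd examples below) would be to add lines such as:
export GLOBUS_TCP_PORT_RANGE=40000,50000
export GLOBUS_TCP_SOURCE_RANGE=40000,50000
Include the second line only if you restrict outbound connections.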
The above ports and protocols must be open to and from all grid clients and server machines participating in the grid in order to provide minimal functionality.
In addition to the above, port 9443 must be open for both incoming and outgoing connections in order to test the web-services capabilities of the most recent versions of the VDT.
You also may need to open the following optional incoming ports for additional Grid services. Note that unlike the ones listed above, the following are optional and are only needed if you are running those specific services or if required by your specific virtual organization.
> cat /etc/hosts.allow
#
# hosts.allow   This file describes the names of the hosts which are
#               allowed to use the local INET services, as decided
#               by the '/usr/sbin/tcpd' server.
#
sshd: 129.79.6.113
ALL : localhost
vdt-run-gsiftp.sh : ALL
vdt-run-globus-gatekeeper.sh : ALL
For RH9, RHEL3 or compatible iptables systems
The default firewall configuration for Red Hat's iptables sets the system up with a stateful packet filter. This differs from some legacy RH7 systems: by default, no ports that are not explicitly opened by the iptables script will be open. This includes high-numbered ports that are often used by grid services. If your preference is to leave as much of the stateful packet filtering in place but enable just those grid services you want to deploy, then you can use the following instructions.
Two changes need to be made to an OSG gateway with a host-based iptables stateful firewall. First is the configuration of the firewall itself. On RHEL or similar systems this is done in /etc/sysconfig/iptables. The chain RH-Firewall-1-INPUT is a default chain for RHEL3. This chain is also sometimes called INPUT. Make sure the following rules use the same chain that the other rules in /etc/sysconfig/iptables do.
Note: For GSISSH this port is often already open for systems. You can use either this rule or the default rule set up at install time if you selected custom firewall and enabled ssh.
# Globus: Requires additional configuration in /etc/xinetd.d/globus-gatekeeper
# set: env = GLOBUS_TCP_PORT_RANGE=40000,50000
# This allows up to 10,000 ports and matches the globus config.
# How globus is configured to use these ports is subject to change in an upcoming release
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 40000:50000 -j ACCEPT
# MonALISA, grabs 3 ports from the following range
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 9000:9010 -j ACCEPT
# GridFTP
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 2811 -j ACCEPT
# MDS
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 2135 -j ACCEPT
# GRAM
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 2119 -j ACCEPT
# Optional Services
# RLS Server
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 39281 -j ACCEPT
# MyProxy
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 7512 -j ACCEPT
# GSISSH/SSH
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 22 -j ACCEPT
# GIIS
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 2136 -j ACCEPT
# GUMS/VOMS
-A RH-Firewall-1-INPUT -m state --state NEW -p tcp -m tcp --dport 8443 -j ACCEPT
Second, we configure Globus to use the allowed inbound port range.
/etc/xinetd.d/globus-gatekeeper
service globus-gatekeeper
{
socket_type = stream
protocol = tcp
wait = no
user = root
instances = UNLIMITED
cps = 400 10
server = $VDT_LOCATION/vdt/sbin/vdt-run-globus-gatekeeper.sh
env = GLOBUS_TCP_PORT_RANGE=40000,50000
disable = no
}
If you restrict outbound connections (to the same port range, for example), you should also modify the gsiftp config file.
/etc/xinetd.d/globus-gsiftp
service gsiftp
{
socket_type = stream
protocol = tcp
wait = no
user = root
instances = UNLIMITED
cps = 400 10
server = $VDT_LOCATION/vdt/sbin/vdt-run-gsiftp.sh
env += GLOBUS_TCP_SOURCE_RANGE=40000,50000
disable = no
}
Finally, add the port range(s) to $VDT_LOCATION/globus/etc/globus-job-manager.conf to ensure that they are picked up by other services, by adding the following lines (omit the globus-tcp-source-range line if you do not restrict outbound connections):
-globus-tcp-port-range 40000,50000
-globus-tcp-source-range 40000,50000
Note: $VDT_LOCATION should be set by the pacman installer
If you limit the globus-related port range to certain values, it may be necessary to adjust the Linux ephemeral port range to avoid these values.
If this has not already been done, check /etc/sysctl.conf for the following lines and insert them if needed:
# Limit ephemeral ports to avoid globus tcp port range
# See OSG CE install guide
net.ipv4.ip_local_port_range = 10240 39999
Save and exit if edited. Then, if you changed it, apply the changes by doing:
sysctl -p
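You can confirm the new ephemeral range is in effect with:
# sysctl net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 10240    39999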
After editing the above files, run the following commands:
# /etc/rc.d/init.d/iptables restart
Flushing firewall rules:                  [  OK  ]
Setting chains to policy ACCEPT: filter   [  OK  ]
Unloading iptables modules:               [  OK  ]
Applying iptables firewall rules:         [  OK  ]
# /etc/rc.d/init.d/xinetd reload
Reloading configuration:                  [  OK  ]
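To spot-check that the grid-service rules were loaded (the chain may be named INPUT on your system, as noted above; the ports listed here are from the examples), you can list the chain and look for the expected ports:
# iptables -L RH-Firewall-1-INPUT -n | grep -E '2119|2811|2135'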
vdt-untar is untarring prima-0.3.x86_rh_9.tar.gz
gzip: stdin: not in gzip format