Solaris 10 Setup


This documentation can be redistributed and/or modified under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2, or (at your option) any later version.

Unless required by applicable law, this documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

This documentation should not be used as a replacement for a valid Oracle service contract and/or an Oracle service engagement. Failure to follow Oracle guidelines for installation and/or maintenance could result in service/warranty issues with Oracle.

Use of this documentation is at your own risk!

--Tom Stevenson (talk) 17:11, 26 May 2015 (EDT)


Banner 8 setups			 (Still a work in progress)
T5440 Setup			 (Still a work in progress) 
M5000 Setup			 (Still a work in progress) 
Solaris 10 Setup		 (Still a work in progress) 
Fair Share Scheduler		 (Still a work in progress) 
Resource Pools			 (Still a work in progress) 
Solaris Cluster 3.2		 (Still a work in progress) 
Solaris Zones			 (Still a work in progress) 
Patching Cluster with HA-Zones	 (Still a work in progress) 

Setting up the OS

Although only one server is used in the following examples, all of the following steps must be executed on every server in the cluster.
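Since every step has to be repeated on each node, a small wrapper loop can save typing. This is only a sketch: the node list is taken from the examples below, and the ssh invocation is left as an echo dry run so nothing is executed until you remove the echo.

```shell
# Dry-run sketch: repeat the same command on every node in the cluster.
# NODES is an assumption based on the examples in this document; adjust
# it for the cluster being built, then drop the leading "echo".
NODES="banpapp1 banpapp2 banpapp3"
for node in $NODES; do
    echo ssh "root@$node" uname -r
done
```

Replacing `uname -r` with any of the commands below runs that step on all nodes in one pass.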

Setting up MPxIO for the NetApp SAN

Execute the following command to set up the NetApp LUNs for MPxIO:

[root@banpapp2 ~]# /opt/NTAP/SANToolkit/bin/basic_config -ssd_set

This adds the following line to the end of /kernel/drv/ssd.conf:

ssd-config-list="NETAPP  LUN", "netapp-ssd-config";

Setting up stmsboot

To enable Solaris I/O multipathing on all multipath-capable Fibre Channel controller ports, enter:

[root@banpapp2 ~]# stmsboot -D fp -e

Setting up the /etc/inet/hosts file

Populate the /etc/inet/hosts file with all known hosts in the cluster.

::1			localhost		localhost		loghost		nis3		nis4		jumpstart1		   jumpstart		servervantage
# banpapp Global
#		banpapp1		   banpapp1-1		banpapp1-2		banpapp1-test1		banpapp1-test2
#		banpapp2		   banpapp2-1		banpapp2-2		banpapp2-test1		banpapp2-test2
#		banpapp3		   banpapp3-1		banpapp3-2		banpapp3-test1		banpapp3-test2
# bantapp Global
#		bantapp1	   bantapp1-1		bantapp1-2		bantapp1-test1		bantapp1-test2
#		bantapp2	   bantapp2-1		bantapp2-2		bantapp2-test1		bantapp2-test2
#		bantapp3	   bantapp3-1		bantapp3-2		bantapp3-test1		bantapp3-test2
# banpdb Global
#		banpdb1 banpdb1-1		banpdb1-2		banpdb1-test1		banpdb1-test2
#		banpdb2 banpdb2-1		banpdb2-2		banpdb2-test1		banpdb2-test2
# bantdb Global
#		bantdb1 bantdb1-1		bantdb1-2		bantdb1-test1		bantdb1-test2
#		bantdb2 bantdb2-1		bantdb2-2		bantdb2-test1		bantdb2-test2
# Zones banpapp
#		edi			   edi1		banpcold1		banpinb1		   banpinb3		banpinb2		   banpinb4 
#		banpsch1		   banpsch2		banpweb1		   banpweb2		cognospapp1		   cognospapp3		cognospweb1		   cognospweb3	lumpapp1			   lumprod3		banpssb1		banpssb2		wsupemgc1		   wsupemgc2		cognospweb2		   cognospweb4		workflowp1
# Zones bantapp
#		bantinb1	   bantinb3		bantssb1		bantssb2		editest1		bantinb2	   bantinb4		cognostapp1	   cognostapp3		cognostweb1	   cognostweb3		cognostweb2	   cognostweb4		workflowt1	   lumtapp1	   lumtapp2	   lumtapp3	   lumdapp1	   lumdapp2	   lumdapp3
# Zones banpdb
#		banpdbs1 banpdbs2		banpmrt1 banpmrt2
# Zones bantdb
#		bantdbs1 bantdbs2		bantmrt1 bantmrt2
# LHN banpdb
#		odsprod		lump		wsu1		rct1		rct2		sprd		odsclone		trg6		trg7		devl		trod		spacep		pprd7		odsmrt1		c8pp		vmdb1		devl8		pprd8		trng8		trng8s		lum4d		lum4t		trod8		lum4p		crnprod
# LHN bantdb
#		sdbx		trng		trg6		trg7		sdev		devl		ecs1		ecs2		trod		spacet		c8pp		qa01		adastra		odsban8		schc		scht		schp
# Netbackup
#		netbackup1-bk0		netbackup1-bk		netbackup1-bk8		netbackup1-bk68
# Netbackup banpapp Global
#		banpapp1-1-bk	   banpapp1-bk		banpapp1-2-bk		banpapp1-test1-bk		banpapp1-test2-bk
#		banpapp2-1-bk	   banpapp2-bk		banpapp2-2-bk		banpapp2-test1-bk		banpapp2-test2-bk
#		banpapp3-1-bk	   banpapp3-bk		banpapp3-2-bk		banpapp3-test1-bk		banpapp3-test2-bk
# Netbackup bantapp Global
#		bantapp1-1-bk		bantapp1-2-bk		bantapp1-test1-bk		bantapp1-test2-bk
#		bantapp2-1-bk		bantapp2-2-bk		bantapp2-test1-bk		bantapp2-test2-bk
#		bantapp3-1-bk		bantapp3-2-bk		bantapp3-test1-bk		bantapp3-test2-bk
# Netbackup banpdb Global
#		banpdb1-1-bk		banpdb1-2-bk		banpdb1-test1-bk		banpdb1-test2-bk
#		banpdb2-1-bk		banpdb2-2-bk		banpdb2-test1-bk		banpdb2-test2-bk
# Netbackup bantdb Global
#		bantdb1-1-bk		bantdb1-2-bk		bantdb1-test1-bk		bantdb1-test2-bk
#		bantdb2-1-bk		bantdb2-2-bk		bantdb2-test1-bk		bantdb2-test2-bk
# Netbackup banpapp zones
#		edi-bk		   edi1-bk		banpcold1-bk		banpinb1-bk 	   banpinb3-bk		banpinb2-bk 	   banpinb4-bk		banpsch1-bk	   banpsch2-bk		cognospapp1-bk    cognospapp3-bk		cognospweb1-bk    cognospweb3-bk		lumpapp1-bk	   lumprod3-bk		banpssb1-bk		banpssb2-bk		wsupemgc1-bk 	   wsupemgc2-bk		cognospweb2-bk    cognospweb4-bk		workflowp1-bk
# Netbackup bantapp zones
#		lumtapp1-bk		lumtapp2-bk		lumtapp3-bk		lumdapp1-bk		lumdapp2-bk		lumdapp3-bk		bantinb1-bk       bantinb3-bk		bantssb1-bk		bantssb2-bk		editest1-bk		bantinb2-bk       bantinb4-bk		cognostapp1-bk    cognostapp3-bk		cognostweb1-bk    cognostweb3-bk		cognostweb2-bk    cognostweb4-bk		workflowt1-bk
# Netbackup banpdb zones
#		banpdbs1-bk68 	   banpdbs2-bk68		banpmrt1-bk68 	   banpmrt2-bk68
# Netbackup bantdb zones
#		bantdbs1-bk68	   bantdbs2-bk68		bantmrt1-bk68	   bantmrt2-bk68
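With this many entries, it is easy to assign the same hostname or alias to two different addresses. A quick awk pass can flag duplicates; the sketch below runs against sample data (the addresses and names are placeholders, not the real cluster values) and on a live host would be pointed at /etc/inet/hosts instead.

```shell
# Flag any hostname or alias that appears on more than one line of a
# hosts file. Sample data stands in for /etc/inet/hosts here.
cat > /tmp/hosts.sample <<'EOF'
192.0.2.1   banpapp1  banpapp1-1
192.0.2.2   banpapp2  banpapp2-1
192.0.2.3   banpapp2  banpapp3-1
EOF
# Print every name column (skipping comment lines), then report repeats.
awk '$1 !~ /^#/ { for (i = 2; i <= NF; i++) print $i }' /tmp/hosts.sample |
    sort | uniq -d
```

Any name printed by the pipeline appears on more than one line and should be investigated before the cluster software is installed.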

Setting up IPMP

Set up IPMP for the public and Netbackup interfaces by executing the following commands (once per server). The commands below are examples; use the correct hostnames and NICs for the server being configured.

[root@banpapp1 ~]# cluster_configure_ipmp public_ipmp banpapp1@nxge7 banpapp1-2@nxge11
[root@banpapp2 ~]# cluster_configure_ipmp public_ipmp banpapp2@nxge7 banpapp2-2@nxge11
[root@banpapp3 ~]# cluster_configure_ipmp public_ipmp banpapp3@nxge7 banpapp3-2@nxge11

and for Netbackup:

[root@banpapp1 ~]# cluster_configure_ipmp netbackup_ipmp banpapp1-1-bk@nxge4 banpapp1-2-bk@nxge8
[root@banpapp2 ~]# cluster_configure_ipmp netbackup_ipmp banpapp2-1-bk@nxge4 banpapp2-2-bk@nxge8
[root@banpapp3 ~]# cluster_configure_ipmp netbackup_ipmp banpapp3-1-bk@nxge4 banpapp3-2-bk@nxge8

Setting up SVM

Copy VTOC to second boot disk

Because the jumpstart process does not place root and swap at fixed locations on the boot disk, it is very important that the second boot disk be set up using the prtvtoc and fmthard commands BEFORE the metadevices are created.

[root@banpapp2 ~]# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
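After fmthard, the two disk labels should be identical, and comparing the prtvtoc output of both disks confirms the copy. The pattern is sketched below with stand-in files (prtvtoc only exists on the Solaris host); the commented lines show what would actually be run there.

```shell
# Sketch: confirm the second boot disk's label matches the first.
# On the host, replace the printf/cp lines with:
#   prtvtoc /dev/rdsk/c0t0d0s2 > /tmp/vtoc0
#   prtvtoc /dev/rdsk/c0t1d0s2 > /tmp/vtoc1
printf 'slice tag flags start size\n0 2 00 0 41943040\n' > /tmp/vtoc0
cp /tmp/vtoc0 /tmp/vtoc1
if diff -q /tmp/vtoc0 /tmp/vtoc1 >/dev/null; then
    echo "labels match"
else
    echo "labels differ -- rerun fmthard" >&2
fi
```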

Setup the root file systems using the following SVM commands:

Setup metadb

Verify that the metadb has not already been set up:

[root@banpapp2 ~]# metadb -i
metadb: banpapp3: there are no existing databases

Set up the metadb:

[root@banpapp2 ~]# metadb -f -a -c 6 c0t0d0s7 c0t1d0s7
[root@banpapp2 ~]# metadb -i
       flags           first blk       block count
    a        u         16              8192            /dev/dsk/c0t0d0s7
    a        u         8208            8192            /dev/dsk/c0t0d0s7
    a        u         16400           8192            /dev/dsk/c0t0d0s7
    a        u         24592           8192            /dev/dsk/c0t0d0s7
    a        u         32784           8192            /dev/dsk/c0t0d0s7
    a        u         40976           8192            /dev/dsk/c0t0d0s7
    a        u         16              8192            /dev/dsk/c0t1d0s7
    a        u         8208            8192            /dev/dsk/c0t1d0s7
    a        u         16400           8192            /dev/dsk/c0t1d0s7
    a        u         24592           8192            /dev/dsk/c0t1d0s7
    a        u         32784           8192            /dev/dsk/c0t1d0s7
    a        u         40976           8192            /dev/dsk/c0t1d0s7
r - replica does not have device relocation information
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors

Setup SVM

Setup the root, var and swap SVM file systems:

[root@banpapp2 ~]# metainit -f d10 1 1 c0t0d0s0
[root@banpapp2 ~]# metainit -f d11 1 1 c0t0d0s1
[root@banpapp2 ~]# metainit -f d13 1 1 c0t0d0s3

[root@banpapp2 ~]# metainit d0 -m d10
[root@banpapp2 ~]# metainit d1 -m d11
[root@banpapp2 ~]# metainit d3 -m d13

Setup the global file system. Use the node ID as the first digit.

[root@banpapp1 ~]# metainit -f d116 1 1 c0t0d0s6
[root@banpapp1 ~]# metainit d106 -m d116 

[root@banpapp2 ~]# metainit -f d216 1 1 c0t0d0s6
[root@banpapp2 ~]# metainit d206 -m d216 

[root@banpapp3 ~]# metainit -f d316 1 1 c0t0d0s6
[root@banpapp3 ~]# metainit d306 -m d316 

Set up the /etc/vfstab entries for the SVM device names:

[root@banpapp2 ~]# metaroot d0
[root@banpapp2 ~]# /net/jumpstart/export/san/bin/svm_update_vfstab

Fix the device name in /etc/vfstab for the global file system by changing d6 into one of d106, d206, or d306 (based on what will become the node ID in Solaris Cluster 3.2).
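That rename can be done with sed. The vfstab line below is a constructed sample (field layout only; the mount point and options on the real host may differ), and the edit writes to stdout, so nothing changes until it is applied to a backed-up copy of /etc/vfstab.

```shell
# Sketch: rename the global filesystem metadevice d6 -> d206 (node ID 2).
# Demonstrated on a sample line; back up /etc/vfstab before editing it.
sample='/dev/md/dsk/d6 /dev/md/rdsk/d6 /global ufs 2 no global'
# The trailing [^0-9] guard keeps the pattern from also matching d60, d6x etc.
echo "$sample" | sed 's|/d6\([^0-9]\)|/d206\1|g'
```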

Reboot the server.

Define the second submirror:

[root@banpapp2 ~]# metainit -f d20 1 1 c0t1d0s0
[root@banpapp2 ~]# metainit -f d21 1 1 c0t1d0s1
[root@banpapp2 ~]# metainit -f d23 1 1 c0t1d0s3

Attach the submirrors to the mirrors:

[root@banpapp2 ~]# metattach d0 d20
[root@banpapp2 ~]# metattach d1 d21
[root@banpapp2 ~]# metattach d3 d23

Setup and attach the second submirror for the global file system. Use the node ID as the first digit:

[root@banpapp1 ~]# metainit -f d126 1 1 c0t1d0s6
[root@banpapp1 ~]# metattach d106 d126

[root@banpapp2 ~]# metainit -f d226 1 1 c0t1d0s6
[root@banpapp2 ~]# metattach d206 d226

[root@banpapp3 ~]# metainit -f d326 1 1 c0t1d0s6
[root@banpapp3 ~]# metattach d306 d326
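metattach starts a background resync of each newly attached submirror, and the mirror is not redundant until every submirror reports Okay. On the host the check is just `metastat | grep State:`; the sketch below runs the same filter against sample output so the pattern can be followed anywhere.

```shell
# Count submirror states that are not yet "Okay". A nonzero count means
# a resync is still in progress. Sample output stands in for metastat.
cat > /tmp/metastat.sample <<'EOF'
d0: Mirror
    Submirror 0: d10
      State: Okay
    Submirror 1: d20
      State: Resyncing
EOF
grep 'State:' /tmp/metastat.sample | grep -cv 'Okay'
```

Wait for the count to reach zero before rebooting or pulling a disk.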

Setting up the Runtime Linking Environment

The runtime linking environment defines the default library search path used by the runtime linker, and therefore which libraries can be used by system daemons.

Verify the runtime linking environment

First, verify whether the runtime linking environment has been set up, using the /usr/bin/crle command (configure runtime linking environment). This example shows the output for a system which has NOT been configured: the file /var/ld/ld.config is not found, and the Default Library Path (ELF) is set to the system default of /lib:/usr/lib.

[root@banpapp1 ~]# /usr/bin/crle

Default configuration file (/var/ld/ld.config) not found
  Default Library Path (ELF):   /lib:/usr/lib  (system default)
  Trusted Directories (ELF):    /lib/secure:/usr/lib/secure  (system default)

Setup the runtime linking environment

If the runtime linking environment has not yet been set up (see the output above), use the following command to configure it:

[root@banpssb2 ~]# /usr/bin/crle -c /var/ld/ld.config -l /lib:/usr/lib:/opt/local/lib

Next, verify that the runtime linking environment has been set up. The output should show that the file /var/ld/ld.config is now in use, and that the Default Library Path (ELF) is no longer the system default but has been replaced with /lib:/usr/lib:/opt/local/lib, which includes our additional libraries from /opt/local/lib. The output also lists the command that was used to reconfigure the runtime linking environment, which should be crle -c /var/ld/ld.config -l /lib:/usr/lib:/opt/local/lib:

[root@banpssb2 ~]# /usr/bin/crle

Configuration file [version 4]: /var/ld/ld.config
  Default Library Path (ELF):   /lib:/usr/lib:/opt/local/lib
  Trusted Directories (ELF):    /lib/secure:/usr/lib/secure  (system default)

Command line:
  crle -c /var/ld/ld.config -l /lib:/usr/lib:/opt/local/lib

Setup the sysdata storage pool and file systems

Placeholder for this step. Needs to be filled in.

Setup the sysdata storage pool

Placeholder for this step. Needs to be filled in.

Setup the sysdata file system

Placeholder for this step. Needs to be filled in.

Setup the default third party system packages

Before proceeding with this step, make sure that all of the steps in Setting up the Runtime Linking Environment and Setup the sysdata storage pool and file systems have been completed first. Failing to do so could result in packages failing to install or in daemons installed during this step failing to run.

There are several third party system packages which are installed on all Solaris 10 servers (both in the global and local zones). The binaries for these packages are installed in a ZFS file system mounted as /opt/local, and their configuration and log files, as necessary, are installed in a ZFS file system mounted as /var/local.

The steps to create the ZFS pool and these two ZFS file systems will be added at a later time. The following steps assume that they have already been created.

Install the default third party system packages

Execute the following command in both the global and local zones to install and configure the third party system packages:

[root@banpapp1 ~]# install-pkgs

--Tom Stevenson (talk) 13:49, 16 April 2013 (EDT)
