Solaris Cluster 3.2


This documentation can be redistributed and/or modified under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2, or (at your option) any later version.

Unless required by applicable law, this documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

This documentation should not be used as a replacement for a valid Oracle service contract and/or an Oracle service engagement. Failure to follow Oracle guidelines for installation and/or maintenance could result in service/warranty issues with Oracle.

Use of this documentation is at your own risk!

--Tom Stevenson (talk) 17:11, 26 May 2015 (EDT)


Index

Banner 8 setups			 (Still a work in progress)
T5440 Setup			 (Still a work in progress) 
M5000 Setup			 (Still a work in progress) 
Solaris 10 Setup		 (Still a work in progress) 
Fair Share Scheduler		 (Still a work in progress) 
Resource Pools			 (Still a work in progress) 
Solaris Cluster 3.2		 (Still a work in progress) 
Solaris Zones			 (Still a work in progress) 
Patching Cluster with HA-Zones	 (Still a work in progress) 

Setting up Solaris Cluster 3.2

Although only one server is shown in most of the following examples, unless otherwise noted, all of the following steps must be executed on all servers within the cluster.
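
Any convenient method can be used to repeat a step on every node. As a minimal sketch (assuming root ssh access between the nodes and using the banpapp node names from the examples below), a simple shell loop works:

[root@banpapp1 ~]# for node in banpapp1 banpapp2 banpapp3; do
> ssh root@${node} 'cat /etc/release'      # replace with whatever step needs repeating
> done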

Setting up the /etc/hosts.allow file

Populate the /etc/hosts.allow file with all known hosts in the cluster. This example uses the nodes of the banpapp cluster; replace the node names with those of the cluster being implemented.

#
# Copyright (c) 2002 by Sun Microsystems, Inc.
# All rights reserved.
#
#ident  "@(#)hosts.allow-suncluster 1.2     05/03/22     SMI"
#
# This file is supplied as part of the Solaris Security Toolkit and
# is used to grant access to specific services as part of the Solaris 9
# TCP Wrappers implementation.  This file should be customized based
# on individual site needs.
#

# This machine must allow cluster access to all services
# ALL: <other cluster members> localhost
#

ALL:    localhost banpapp1 banpapp1-2 banpapp2 banpapp2-2 banpapp3 banpapp3-2 banpapp1-cluster banpapp2-cluster banpapp3-cluster
sshd:   ALL

#
# rpcbind does NOT look up hostnames, so only IP addresses can be used
# to allow or deny access to external servers.  This does NOT use the 
# netmask, so include ALL address ranges (for example, include both 
# 141.217.0. and 141.217.1., not just 141.217.0.)
#

rpcbind:        141.217.0. 141.217.1. 141.217.69. 141.217.68. 172.16. 172.16.0. 172.16.1. 172.16.4.

#
# Netbackup ports
#

bpcd:           netbackup1-bk
bpjava-msvc:    netbackup1-bk
vnetd:          netbackup1-bk
vopied:         netbackup1-bk
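
Note that /etc/hosts.allow is only consulted by services that have TCP wrappers enabled, which is not the default for every Solaris 10 service. The following is only a sketch of how TCP wrappers are commonly enabled for inetd-managed services and for rpcbind; verify the service names and properties against your own configuration before applying:

[root@banpapp1 ~]# inetadm -M tcp_wrappers=TRUE                                      # default for all inetd-managed services
[root@banpapp1 ~]# svccfg -s network/rpc/bind setprop config/enable_tcpwrappers = true
[root@banpapp1 ~]# svcadm refresh network/rpc/bind                                   # make rpcbind pick up the change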

Installing Solaris Cluster 3.2

Solaris Cluster 3.2 Installer

Run the installer from an X11-compatible server (a UNIX server or a Windows server running X11 software). Install Solaris Cluster 3.2 by executing the following commands on all nodes that will be part of the cluster:

[root@banpapp2 ~]#  cd /net/jumpstart/export/cluster/Cluster_3.2/Solaris_sparc/
[root@banpapp2 Solaris_sparc]# ./installer
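
The installer needs a DISPLAY it can open. If it is launched over ssh from the X11-capable workstation, one approach (a sketch; the workstation prompt below is only an example) is to use ssh X11 forwarding rather than setting DISPLAY by hand:

workstation$ ssh -X root@banpapp2          # -X requests X11 forwarding
[root@banpapp2 ~]# echo $DISPLAY           # should print something like localhost:10.0 before running ./installer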

Sun Java Availability Suite Install Wizard

This will start an X11 GUI called "Sun Java(tm) Availability Suite Install Wizard". Click through each screen of the wizard until the "Choose Software Components" screen is reached.

DO NOT CLICK Next UNTIL ALL REQUIRED OPTIONS FROM THIS SCREEN HAVE BEEN SELECTED.

At this screen select all of the following:

Sun Cluster 3.2

Under "Sun Cluster Agents 3.2", select the appropriate agents for the cluster. The following are the recommended minimum subset of agents that should be installed:

Sun Cluster HA for NFS
Sun Cluster HA for Oracle
Sun Cluster HA for Samba
Sun Cluster HA for Solaris Containers

Under "Shared Services", select:

All Shared Components

Once all of the above have been selected (and any others that might be needed), click Next.

Click through each screen until the "Choose a Configuration Type" screen is reached. At this screen pick "Configure Later", and click Next.

The next screen is labeled "Ready to Install". Click Install, and wait for the installation to complete.

Patch the Sun Cluster 3.2 and cacao software

Using any method you wish (I use smpatch), patch all of the Sun Cluster 3.2 and cacao software on all nodes that will be part of the cluster.
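
As a sketch, the smpatch approach mentioned above normally comes down to an analyze/update pair on each node (the exact patch list depends on the registered update source):

[root@banpapp2 ~]# smpatch analyze          # list patches applicable to this node
[root@banpapp2 ~]# smpatch update           # download and apply them
[root@banpapp2 ~]# cacaoadm status          # confirm the common agent container is still healthy afterwards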

Configure Solaris Cluster 3.2

Environment Variables

Execute the following on all servers that will be part of the cluster to make sure the environment variables are set up for Solaris Cluster 3.2:

[root@banpapp2 ~]# . /etc/profile && . ~/.profile
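
The exact contents of /etc/profile and ~/.profile are site-specific; the intent is simply that the Sun Cluster command and man page directories end up on the search paths. A minimal sketch of the relevant lines (paths per the standard Sun Cluster 3.2 layout):

PATH=$PATH:/usr/cluster/bin
MANPATH=$MANPATH:/usr/cluster/man
export PATH MANPATH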

Configuring Node 1 of the cluster

Start with the server that will be node 1 (These commands will NOT be executed on the other servers of the cluster!), and execute the following:

[root@banpapp1 ~]# scinstall
Main Menu
 *** Main Menu ***

   Please select from one of the following (*) options:

     * 1) Create a new cluster or add a cluster node
       2) Configure a cluster to be JumpStarted from this install server
       3) Manage a dual-partition upgrade
       4) Upgrade this cluster node
       5) Print release information for this cluster node

     * ?) Help with menu options
     * q) Quit

   Option: 1
New Cluster and Cluster Node Menu
 *** New Cluster and Cluster Node Menu ***

   Please select from any one of the following options:

       1) Create a new cluster
       2) Create just the first node of a new cluster on this machine
       3) Add this machine as a node in an existing cluster

       ?) Help with menu options
       q) Return to the Main Menu

   Option: 2
Establish Just the First Node of a New Cluster
 *** Establish Just the First Node of a New Cluster ***

   This option is used to establish a new cluster using this machine as
   the first node in that cluster.

   Before you select this option, the Sun Cluster framework software
   must already be installed. Use the Java Enterprise System (JES)
   installer to install Sun Cluster software.

   Press Control-d at any time to return to the Main Menu.

   Do you want to continue (yes/no) [yes]? yes
Typical or Custom Mode
 >>> Typical or Custom Mode <<<

   This tool supports two modes of operation, Typical mode and Custom.
   For most clusters, you can use Typical mode. However, you might need
   to select the Custom mode option if not all of the Typical defaults
   can be applied to your cluster.

   For more information about the differences between Typical and Custom
   modes, select the Help option from the menu.

   Please select from one of the following options:

       1) Typical
       2) Custom

       ?) Help
       q) Return to the Main Menu

   Option [1]: 2
Cluster Name
 >>> Cluster Name <<<

   Each cluster has a name assigned to it. The name can be made up of
   any characters other than whitespace. Each cluster name should be
   unique within the namespace of your enterprise.

   What is the name of the cluster you want to establish?  banpapp
Check
 >>> Check <<<

   This step allows you to run sccheck(1M) to verify that certain basic
   hardware and software pre-configuration requirements have been met.
   If sccheck(1M) detects potential problems with configuring this
   machine as a cluster node, a report of failed checks is prepared and
   available for display on the screen. Data gathering and report
   generation can take several minutes, depending on system
   configuration.

   Do you want to run sccheck (yes/no) [yes]?  no
Cluster Nodes
 >>> Cluster Nodes <<<

   This Sun Cluster release supports a total of up to 16 nodes.

   Please list the names of the other nodes planned for the initial
   cluster configuration. List one node name per line. When finished,
   type Control-D:

   Node name (Control-D to finish):  banpapp2
   Node name (Control-D to finish):  banpapp3
   Node name (Control-D to finish):  ^D

   This is the complete list of nodes:

       banpapp2
       banpapp3

   Is it correct (yes/no) [yes]?  yes
Authenticating Requests to Add Nodes
 >>> Authenticating Requests to Add Nodes <<<

   Once the first node establishes itself as a single node cluster,
   other nodes attempting to add themselves to the cluster configuration
   must be found on the list of nodes you just provided. You can modify
   this list by using claccess(1CL) or other tools once the cluster has
   been established.

   By default, nodes are not securely authenticated as they attempt to
   add themselves to the cluster configuration. This is generally
   considered adequate, since nodes which are not physically connected
   to the private cluster interconnect will never be able to actually
   join the cluster. However, DES authentication is available. If DES
   authentication is selected, you must configure all necessary
   encryption keys before any node will be allowed to join the cluster
   (see keyserv(1M), publickey(4)).

   Do you need to use DES authentication (yes/no) [no]?  no
Minimum Number of Private Networks
 >>> Minimum Number of Private Networks <<<

   Each cluster is typically configured with at least two private
   networks. Configuring a cluster with just one private interconnect
   provides less availability and will require the cluster to spend more
   time in automatic recovery if that private interconnect fails. 

   Should this cluster use at least two private networks (yes/no) [yes]?  yes
Point-to-Point Cables
 >>> Point-to-Point Cables <<<

   The two nodes of a two-node cluster may use a directly-connected
   interconnect. That is, no cluster switches are configured. However,
   when there are greater than two nodes, this interactive form of
   scinstall assumes that there will be exactly two cluster switches.

   Since this is not a two-node cluster, you will be asked to configure
   two switches.

Press Enter to continue:
Cluster Switches
 >>> Cluster Switches <<<

   All cluster transport adapters in this cluster must be cabled to a
   "switch". And, each adapter on a given node must be cabled to a
   different switch. Interactive scinstall requires that you identify
   two switches for use in the cluster and the two transport adapters on
   each node to which they are cabled.

   What is the name of the first switch in the cluster [switch1]?
   What is the name of the second switch in the cluster [switch2]?
Cluster Transport Adapters and Cables
 >>> Cluster Transport Adapters and Cables <<<

   You must configure at least two cluster transport adapters for each
   node in the cluster. These are the adapters which attach to the
   private cluster interconnect.

   What is the name of the first cluster transport adapter?  nxge5

   Will this be a dedicated cluster transport adapter (yes/no) [yes]?  yes

   All transport adapters support the "dlpi" transport type. Ethernet
   and Infiniband adapters are supported only with the "dlpi" transport;
   however, other adapter types may support other types of transport.

   Is "nxge5" an Ethernet adapter (yes/no) [no]?  yes

   The "dlpi" transport type will be set for this cluster.

   Name of the switch to which "nxge5" is connected [switch1]?

   Each adapter is cabled to a particular port on a switch. And, each
   port is assigned a name. You can explicitly assign a name to each
   port. Or, for Ethernet and Infiniband switches, you can choose to
   allow scinstall to assign a default name for you. The default port
   name assignment sets the name to the node number of the node hosting
   the transport adapter at the other end of the cable.

   Use the default port name for the "nxge5" connection (yes/no) [yes]?  yes

   What is the name of the second cluster transport adapter?  nxge9

   Will this be a dedicated cluster transport adapter (yes/no) [yes]?  yes

   Name of the switch to which "nxge9" is connected [switch2]?

   Use the default port name for the "nxge9" connection (yes/no) [yes]?  yes
Network Address for the Cluster Transport
 >>> Network Address for the Cluster Transport <<<

   The cluster transport uses a default network address of 172.16.0.0. If
   this IP address is already in use elsewhere within your enterprise,
   specify another address from the range of recommended private
   addresses (see RFC 1918 for details).

   The default netmask is 255.255.240.0. You can select another netmask,
   as long as it minimally masks all bits that are given in the network
   address. 
   The default private netmask and network address result in an IP
   address range that supports a cluster with a maximum of 64 nodes, 10
   private networks and 0 virtual clusters.

   Is it okay to accept the default network address (yes/no) [yes]?  yes

   Is it okay to accept the default netmask (yes/no) [yes]?  yes
Global Devices File System
 >>> Global Devices File System <<<

   Each node in the cluster must have a local file system mounted on
   /global/.devices/node@<nodeID> before it can successfully participate
   as a cluster member. Since the "nodeID" is not assigned until
   scinstall is run, scinstall will set this up for you.

   You must supply the name of either an already-mounted file system or a
   raw disk partition which scinstall can use to create the global     
   devices file system. This file system or partition should be at least
   512 MB in size.

   Alternatively, you can use a loopback file (lofi), with a new file
   system, and mount it on /global/.devices/node@<nodeid>. 

   If an already-mounted file system is used, the file system must be
   empty. If a raw disk partition is used, a new file system will be
   created for you.

   If the lofi method is used, scinstall creates a new 100 MB file system
   from a lofi device by using the file /.globaldevices. The lofi method     
   is typically preferred, since it does not require the allocation of a
   dedicated disk slice.

   The default is to use /globaldevices.

   Is it okay to use this default (yes/no) [yes]?  yes
Set Global Fencing
 >>> Set Global Fencing <<<

   Fencing is a mechanism that a cluster uses to protect data integrity 
   when the cluster interconnect between nodes is lost. By default,
   fencing is turned on for global fencing, and each disk uses the global
   fencing setting. This screen allows you to turn off the global
   fencing.

   Most of the time, leave fencing turned on. However, turn off fencing
   when at least one of the following conditions is true: 1) Your shared
   storage devices, such as Serial Advanced Technology Attachment (SATA)
   disks, do not support SCSI; 2) You want to allow systems outside your
   cluster to access storage devices attached to your cluster; 3) Sun 
   Microsystems has not qualified the SCSI persistent group reservation
   (PGR) support for your shared storage devices. 

   If you choose to turn off global fencing now, after your cluster
   starts you can still use the cluster(1CL) command to turn on global
   fencing.

   Do you want to turn off global fencing (yes/no) [no]?  no
Automatic Reboot
 >>> Automatic Reboot <<<

   Once scinstall has successfully initialized the Sun Cluster software
   for this machine, the machine must be rebooted. After the reboot,
   this machine will be established as the first node in the new cluster.

   Do you want scinstall to reboot for you (yes/no) [yes]?  no

   You will need to manually reboot this node in "cluster mode" after
   scinstall successfully completes.

Press Enter to continue:
Confirmation
 >>> Confirmation <<<

   Your responses indicate the following options to scinstall: 
     scinstall -i \
          -C banpapp \
          -F \
          -T node=banpapp1,node=banpapp2,node=banpapp3,authtype=sys \ 
          -w netaddr=172.16.0.0,netmask=255.255.240.0,maxnodes=64,maxprivatenets=10,numvirtualclusters=12 \
          -A trtype=dlpi,name=nxge5 -A trtype=dlpi,name=nxge9 \
          -B type=switch,name=switch1 -B type=switch,name=switch2 \
          -m endpoint=:nxge5,endpoint=switch1 \ 
          -m endpoint=:nxge9,endpoint=switch2 \
          -P task=quorum,state=INIT 

   Are these the options you want to use (yes/no) [yes]?  yes

   Do you want to continue with this configuration step (yes/no) [yes]?  yes

Checking device to use for global devices file system ... done

Initializing cluster name to "banpapp" ... done
Initializing authentication options ... done
Initializing configuration for adapter "nxge5" ... done
Initializing configuration for adapter "nxge9" ... done
Initializing configuration for switch "switch1" ... done
Initializing configuration for switch "switch2" ... done
Initializing configuration for cable ... done
Initializing configuration for cable ... done
Initializing private network address options ... done

Setting the node ID for "banpapp1" ... done (id=1)

Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done

Updating nsswitch.conf ... done

Adding cluster node entries to /etc/inet/hosts ... done

Configuring IP multipathing groups ...done

mv: cannot access /usr/lib/brand/cluster/config.xml.orig

Verifying that power management is NOT configured ... done
Unconfiguring power management ... done
/etc/power.conf has been renamed to /etc/power.conf.040109091716
Power management is incompatible with the HA goals of the cluster.
Please do not attempt to re-configure power management.

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done

Press Enter to continue:
Exit scinstall
 *** New Cluster and Cluster Node Menu ***

   Please select from any one of the following options:

       1) Create a new cluster
       2) Create just the first node of a new cluster on this machine
       3) Add this machine as a node in an existing cluster

       ?) Help with menu options
       q) Return to the Main Menu

   Option:  q

 *** Main Menu ***

   Please select from one of the following (*) options:

       1) Create a new cluster or add a cluster node
       2) Configure a cluster to be JumpStarted from this install server
     * 3) Manage a dual-partition upgrade
     * 4) Upgrade this cluster node
     * 5) Print release information for this cluster node

     * ?) Help with menu options
     * q) Quit

   Option:  q
Reboot node

Reboot node 1 of the cluster, and make sure everything is up and running before configuring additional nodes of the cluster.
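
A hedged example of the reboot and the follow-up checks (command names per Sun Cluster 3.2; adjust to local practice):

[root@banpapp1 ~]# shutdown -y -g0 -i6      # reboot; the node boots into cluster mode
(after the node is back up)
[root@banpapp1 ~]# clnode status            # node 1 should be reported as Online
[root@banpapp1 ~]# scstat -n                # older-style view of node status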

Add additional nodes to the cluster

Add additional nodes to the cluster (These commands will NOT be executed on the first node of the cluster!). Execute the following:

[root@banpapp2 ~]# scinstall
Main Menu
  *** Main Menu ***

    Please select from one of the following (*) options:

     * 1) Create a new cluster or add a cluster node
       2) Configure a cluster to be JumpStarted from this install server
       3) Manage a dual-partition upgrade
       4) Upgrade this cluster node
       5) Print release information for this cluster node

     * ?) Help with menu options
     * q) Quit

   Option:  1
New Cluster and Cluster Node Menu
  *** New Cluster and Cluster Node Menu ***

    Please select from any one of the following options:

       1) Create a new cluster
       2) Create just the first node of a new cluster on this machine
       3) Add this machine as a node in an existing cluster

       ?) Help with menu options
       q) Return to the Main Menu

   Option:  3
Add a Node to an Existing Cluster
  *** Add a Node to an Existing Cluster ***

   This option is used to add this machine as a node in an already
   established cluster. If this is a new cluster, there may only be a
   single node which has established itself in the new cluster.

   Before you select this option, the Sun Cluster framework software must
   already be installed. Use the Java Enterprise System (JES) installer
   to install Sun Cluster software.

   Press Control-d at any time to return to the Main Menu.

   Do you want to continue (yes/no) [yes]?  yes
Typical or Custom Mode
 >>> Typical or Custom Mode <<<

   This tool supports two modes of operation, Typical mode and Custom.
   For most clusters, you can use Typical mode. However, you might need
   to select the Custom mode option if not all of the Typical defaults
   can be applied to your cluster.

   For more information about the differences between Typical and Custom
   modes, select the Help option from the menu.

   Please select from one of the following options:

       1) Typical
       2) Custom

       ?) Help
       q) Return to the Main Menu

   Option [1]:  2
Sponsoring Node
  >>> Sponsoring Node <<<

   For any machine to join a cluster, it must identify a node in that
   cluster willing to "sponsor" its membership in the cluster. When
   configuring a new cluster, this "sponsor" node is typically the first
   node used to build the new cluster. However, if the cluster is already
   established, the "sponsoring" node can be any node in that cluster.

   Already established clusters can keep a list of hosts which are able
   to configure themselves as new cluster members. This machine should be
   in the join list of any cluster which it tries to join. If the list
   does not include this machine, you may need to add it by using
   claccess(1CL) or other tools.

   And, if the target cluster uses DES to authenticate new machines
   attempting to configure themselves as new cluster members, the
   necessary encryption keys must be configured before any attempt to
   join.

   What is the name of the sponsoring node?  banpapp1
Cluster Name
  >>> Cluster Name <<<

   Each cluster has a name assigned to it. When adding a node to the
   cluster, you must identify the name of the cluster you are attempting
   to join. A sanity check is performed to verify that the "sponsoring"
   node is a member of that cluster.

   What is the name of the cluster you want to join?  banpapp

   Attempting to contact "banpapp1" ... done

   Cluster name "banpapp" is correct.

Press Enter to continue:
Check
  >>> Check <<<

   This step allows you to run cluster check to verify that certain basic
   hardware and software pre-configuration requirements have been met. If
   cluster check detects potential problems with configuring this machine
   as a cluster node, a report of violated checks is prepared and
   available for display on the screen.

   Do you want to run cluster check (yes/no) [yes]?  no
Autodiscovery of Cluster Transport
  >>> Autodiscovery of Cluster Transport <<<

   If you are using Ethernet or Infiniband adapters as the cluster
   transport adapters, autodiscovery is the best method for configuring
   the cluster transport.

   Do you want to use autodiscovery (yes/no) [yes]?  yes
Point-to-Point Cables
  >>> Point-to-Point Cables <<<

   The two nodes of a two-node cluster may use a directly-connected
   interconnect. That is, no cluster switches are configured. However,
   when there are greater than two nodes, this interactive form of
   scinstall assumes that there will be exactly one switch for each
   private network.
 
   Is this a two-node cluster (yes/no) [yes]?  no

   Since this is not a two-node cluster, you will be asked to configure
   one switch for each private network.

Press Enter to continue:
Cluster Switches
  >>> Cluster Switches <<<

   All cluster transport adapters in this cluster must be cabled to a
   "switch". And, each adapter on a given node must be cabled to a
   different switch. Interactive scinstall requires that you identify one
   switch for each private network in the cluster.

   What is the name of the first switch in the cluster [switch1]?

   What is the name of the second switch in the cluster [switch2]?
Cluster Transport Adapters and Cables
  >>> Cluster Transport Adapters and Cables <<<

   You must configure the cluster transport adapters for each node in the
   cluster. These are the adapters which attach to the private cluster
   interconnect.

   What is the name of the first cluster transport adapter?  nxge5

   Will this be a dedicated cluster transport adapter (yes/no) [yes]?  yes

   All transport adapters support the "dlpi" transport type. Ethernet and
   Infiniband adapters are supported only with the "dlpi" transport;
   however, other adapter types may support other types of transport.

   Is "nxge5" an Ethernet adapter (yes/no) [no]?  yes

   The "dlpi" transport type will be set for this cluster.

   Name of the switch to which "nxge5" is connected [switch1]?

   Each adapter is cabled to a particular port on a switch. And, each
   port is assigned a name. You can explicitly assign a name to each
   port. Or, for Ethernet and Infiniband switches, you can choose to
   allow scinstall to assign a default name for you. The default port
   name assignment sets the name to the node number of the node hosting
   the transport adapter at the other end of the cable.

   Use the default port name for the "nxge5" connection (yes/no) [yes]?  yes

   What is the name of the second cluster transport adapter?  nxge9

   Will this be a dedicated cluster transport adapter (yes/no) [yes]?  yes

   Name of the switch to which "nxge9" is connected [switch2]?

   Use the default port name for the "nxge9" connection (yes/no) [yes]?  yes
Global Devices File System
 >>> Global Devices File System <<<

   Each node in the cluster must have a local file system mounted on
   /global/.devices/node@<nodeID> before it can successfully participate
   as a cluster member. Since the "nodeID" is not assigned until
   scinstall is run, scinstall will set this up for you.

   You must supply the name of either an already-mounted file system or a
   raw disk partition which scinstall can use to create the global
   devices file system. This file system or partition should be at least
   512 MB in size.

   Alternatively, you can use a loopback file (lofi), with a new file
   system, and mount it on /global/.devices/node@<nodeid>.

   If an already-mounted file system is used, the file system must be
   empty. If a raw disk partition is used, a new file system will be
   created for you.

   If the lofi method is used, scinstall creates a new 100 MB file system
   from a lofi device by using the file /.globaldevices. The lofi method
   is typically preferred, since it does not require the allocation of a
   dedicated disk slice.

   The default is to use /globaldevices.

   Is it okay to use this default (yes/no) [yes]?  yes


Quorum Configuration
 >>> Quorum Configuration <<<

   Every two-node cluster requires at least one quorum device. By
   default, scinstall selects and configures a shared disk quorum device
   for you.

   This screen allows you to disable the automatic selection and 
   configuration of a quorum device.

   You have chosen to turn on the global fencing. If your shared storage
   devices do not support SCSI, such as Serial Advanced Technology
   Attachment (SATA) disks, or if your shared disks do not support 
   SCSI-2, you must disable this feature.

   If you disable automatic quorum device selection now, or if you intend
   to use a quorum device that is not a shared disk, you must instead use
   clsetup(1M) to manually configure quorum once both nodes have joined
   the cluster for the first time.

   Do you want to disable automatic quorum device selection (yes/no) [no]?  no
Automatic Reboot
 >>> Automatic Reboot <<<

   Once scinstall has successfully initialized the Sun Cluster software
   for this machine, the machine must be rebooted. The reboot will cause
   this machine to join the cluster for the first time.

   Do you want scinstall to reboot for you (yes/no) [yes]?  no

   You will need to manually reboot this node in "cluster mode" after
   scinstall successfully completes.

Press Enter to continue:
Confirmation
 >>> Confirmation <<<

   Your responses indicate the following options to scinstall:

     scinstall -i \
          -C banpapp \
          -N banpapp1 \
          -A trtype=dlpi,name=nxge5 -A trtype=dlpi,name=nxge9 \
          -m endpoint=:nxge5,endpoint=switch1 \
          -m endpoint=:nxge9,endpoint=switch2

   Are these the options you want to use (yes/no) [yes]?  yes

   Do you want to continue with this configuration step (yes/no) [yes]?  yes

Checking device to use for global devices file system ... done

Adding node "banpapp2" to the cluster configuration ... done
Adding adapter "nxge5" to the cluster configuration ... done
Adding adapter "nxge9" to the cluster configuration ... done
Adding cable to the cluster configuration ... done
Adding cable to the cluster configuration ... done

Copying the config from "banpapp1" ... done

Copying the postconfig file from "banpapp1" if it exists ... done

Setting the node ID for "banpapp2" ... done (id=2)

Verifying the major number for the "did" driver with "banpapp1" ... done

Checking for global devices global file system ... done
Updating vfstab ... done

Verifying that NTP is configured ... done

Updating nsswitch.conf ... done

Adding cluster node entries to /etc/inet/hosts ... done

Configuring IP multipathing groups ...done

mv: cannot access /usr/lib/brand/cluster/config.xml.orig

Verifying that power management is NOT configured ... done
Unconfiguring power management ... done
/etc/power.conf has been renamed to /etc/power.conf.040109095704
Power management is incompatible with the HA goals of the cluster.
Please do not attempt to re-configure power management.

Ensure that the EEPROM parameter "local-mac-address?" is set to "true" ... done

Ensure network routing is disabled ... done

Updating file ("ntp.conf.cluster") on node banpapp1 ... done
Updating file ("hosts") on node banpapp1 ... done

Press Enter to continue:
Exit scinstall
 *** New Cluster and Cluster Node Menu ***

   Please select from any one of the following options:

       1) Create a new cluster
       2) Create just the first node of a new cluster on this machine
       3) Add this machine as a node in an existing cluster

       ?) Help with menu options
       q) Return to the Main Menu

   Option:  q

 *** Main Menu ***

   Please select from one of the following (*) options:

       1) Create a new cluster or add a cluster node
       2) Configure a cluster to be JumpStarted from this install server
     * 3) Manage a dual-partition upgrade
     * 4) Upgrade this cluster node
     * 5) Print release information for this cluster node

     * ?) Help with menu options
     * q) Quit

   Option:  q

Enable Cluster

Once all of the servers are part of the cluster, perform the following steps to fully enable the cluster.
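
clsetup will ask for the DID names of the quorum devices (d19 through d23 in the transcript below are simply the shared LUNs of this example cluster). If the DID numbers are not already known, they can be listed first; a sketch:

[root@banpapp1 ~]# cldevice list -v         # DID-to-device mapping for every node
[root@banpapp1 ~]# scdidadm -L              # equivalent older-style listing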

[root@banpapp1 ~]# clsetup
Initial Cluster Setup
 >>> Initial Cluster Setup <<<

   This program has detected that the cluster "installmode" attribute is
   still enabled. As such, certain initial cluster setup steps will be
   performed at this time. This includes adding any necessary quorum
   devices, then resetting both the quorum vote counts and the
   "installmode" property.

   Please do not proceed if any additional nodes have yet to join the
   cluster.

   Is it okay to continue (yes/no) [yes]?  yes

   Do you want to add any quorum devices (yes/no) [yes]?  yes

   Following are supported Quorum Devices types in Sun Cluster. Please
   refer to Sun Cluster documentation for detailed information on these
   supported quorum device topologies.

   What is the type of device you want to use?

       1) Directly attached shared disk
       2) Network Attached Storage (NAS) from Network Appliance
       3) Quorum Server

       q) Return to the quorum menu

   Option:  1
Add a Shared Disk Quorum Device
 >>> Add a Shared Disk Quorum Device <<<

   If you are using a dual-ported disk, by default, Sun Cluster uses
   SCSI-2. If you are using disks that are connected to more than two
   nodes, or if you manually override the protocol from SCSI-2 to SCSI-3,
   by default, Sun Cluster uses SCSI-3.

   If you turn off SCSI fencing for disks, Sun Cluster uses software
   quorum, which is Sun Cluster software that emulates a form of SCSI
   Persistent Group Reservations (PGR).

   Warning: If you are using disks that do not support SCSI, such as
   Serial Advanced Technology Attachment (SATA) disks, turn off SCSI
   fencing.

   For more information about supported quorum device topologies, see the
   Sun Cluster documentation.

   Is it okay to continue (yes/no) [yes]?  yes

   Which global device do you want to use (d<N>)?  d19

   Is it okay to proceed with the update (yes/no) [yes]?  yes

clquorum add d19

   Command completed successfully.

Press Enter to continue:

   Do you want to add another quorum device (yes/no) [yes]?  yes

   Which global device do you want to use (d<N>)?  d20

   Is it okay to proceed with the update (yes/no) [yes]?  yes

clquorum add d20

   Command completed successfully.

Press Enter to continue:

   Do you want to add another quorum device (yes/no) [yes]?  yes

   Which global device do you want to use (d<N>)?  d21

   Is it okay to proceed with the update (yes/no) [yes]?  yes

clquorum add d21

   Command completed successfully.

Press Enter to continue:

   Do you want to add another quorum device (yes/no) [yes]?  yes

   Which global device do you want to use (d<N>)?  d22

   Is it okay to proceed with the update (yes/no) [yes]?  yes

clquorum add d22

   Command completed successfully.

Press Enter to continue:

   Do you want to add another quorum device (yes/no) [yes]?  yes

   Which global device do you want to use (d<N>)?  d23

   Is it okay to proceed with the update (yes/no) [yes]?  yes

clquorum add d23

   Command completed successfully.

Press Enter to continue:

   Do you want to add another quorum device (yes/no) [yes]?  no

   Once the "installmode" property has been reset, this program will skip
   "Initial Cluster Setup" each time it is run again in the future.
   However, quorum devices can always be added to the cluster using the
   regular menu options. Resetting this property fully activates quorum
   settings and is necessary for the normal and safe operation of the
   cluster.

   Is it okay to reset "installmode" (yes/no) [yes]?  yes

clquorum reset
claccess deny-all

   Cluster initialization is complete.

   Type ENTER to proceed to the main menu:
Exit clsetup
 *** Main Menu ***

   Please select from one of the following options:

       1) Quorum
       2) Resource groups
       3) Data Services
       4) Cluster interconnect
       5) Device groups and volumes
       6) Private hostnames
       7) New nodes
       8) Other cluster tasks

       ?) Help with menu options
       q) Quit

   Option:  q
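
Finally, a hedged set of commands to confirm that quorum and the overall cluster are healthy before moving on:

[root@banpapp1 ~]# clquorum status          # each node and quorum device should be Online with the expected votes
[root@banpapp1 ~]# cluster status           # summary of nodes, transport paths, and quorum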

--Tom Stevenson (talk) 13:51, 16 April 2013 (EDT)
