Solaris Zones


This documentation can be redistributed and/or modified under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2, or (at your option) any later version.

Unless required by applicable law, this documentation is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

This documentation should not be used as a replacement for a valid Oracle service contract and/or an Oracle service engagement. Failure to follow Oracle guidelines for installation and/or maintenance could result in service/warranty issues with Oracle.

Use of this documentation is at your own risk!

--Tom Stevenson (talk) 17:11, 26 May 2015 (EDT)


Index

Banner 8 setups			 (Still a work in progress)
T5440 Setup			 (Still a work in progress) 
M5000 Setup			 (Still a work in progress) 
Solaris 10 Setup		 (Still a work in progress) 
Fair Share Scheduler		 (Still a work in progress) 
Resource Pools			 (Still a work in progress) 
Solaris Cluster 3.2		 (Still a work in progress) 
Solaris Zones			 (Still a work in progress) 
Patching Cluster with HA-Zones	 (Still a work in progress) 

Setup Zones

Unlike most other procedures, these steps are, for the most part, executed only on the primary node the zone will run on. When a step needs to be executed on more than one node, the instructions will say so.

Create the /wsu/cit/ecs/local file system on jumpstart1

On jumpstart1, execute the following to create the zone's /wsu/cit/ecs/local file system:

[root@jumpstart1 ~]# zfs create hosts/export_wsu_cit_ecs_hosts/edi2
[root@jumpstart1 ~]# zfs set sharenfs='rw=edi2:edi2.wayne.edu,anon=0' hosts/export_wsu_cit_ecs_hosts/edi2
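
To confirm the share settings, query the sharenfs property; the VALUE column should show the rw=edi2:edi2.wayne.edu,anon=0 setting (a verification step added here, not part of the original procedure):

[root@jumpstart1 ~]# zfs get sharenfs hosts/export_wsu_cit_ecs_hosts/edi2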

Setup the zonecfg files

For each zone, create a file named zonecfg at the following path:

/global/zones/config/ZONE_NAME/zonecfg

Where ZONE_NAME is the name of the zone to be created. For example, for zone edi2, the path would be:

/global/zones/config/edi2/zonecfg
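
If the configuration directory does not already exist, create it first (this assumes /global/zones/config is already available on the global file system):

[root@banpapp1 ~]# mkdir -p /global/zones/config/edi2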

Here is the configuration file that was used for edi2:

[root@banpapp1 config]# cat /global/zones/config/edi2/zonecfg
create -b

set zonepath=/zones/hosts/edi2/os
set autoboot=false
set ip-type=shared

add dedicated-cpu
set ncpus=1-4
set importance=1
end

add fs
set dir=/global
set special=/global
set type=lofs
add options rw
end

add fs
set dir=/opt/local
set special=/zones/mountpoints/edi2/opt_local
set type=lofs
add options rw
end

add fs
set dir=/var/local
set special=/zones/mountpoints/edi2/var_local
set type=lofs
add options rw
end

add fs
set dir=/usr/openv
set special=/zones/mountpoints/edi2/usr_openv
set type=lofs
add options rw
end

add net
set address=141.217.0.25/23
set physical=nxge7
set defrouter=141.217.1.4
end
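
Once the zone has been configured (see "Configure the zone" below), an equivalent file can be regenerated from the live configuration with zonecfg export; this is one way to keep the stored copy in sync after later changes:

[root@banpapp1 ~]# zonecfg -z edi2 export > /global/zones/config/edi2/zonecfg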

Create the SVM file systems for the zone

Execute the following commands only on the primary node that this zone will run on.

[root@banpapp1 ~]# metaset -s edi2 -a -h banpapp1 banpapp2 banpapp3
[root@banpapp1 ~]# metaset -s edi2 -a /dev/did/rdsk/d6
[root@banpapp1 ~]# metaset -s edi2

Set name = edi2, Set number = 3

Host                Owner
  banpapp1           Yes
  banpapp2
  banpapp3

Driv Dbase

d6   Yes

[root@banpapp1 ~]# metainit -s edi2 d0 1 1 /dev/did/rdsk/d6s0
edi2/d0: Concat/Stripe is setup

[root@banpapp1 ~]# metainit -s edi2 d1 -p d0 20g
edi2/d1: Soft Partition is setup

[root@banpapp1 ~]# metainit -s edi2 d2 -p d0 6g
edi2/d2: Soft Partition is setup
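
The new metadevices can be checked with metastat in concise mode (a verification step added here, not part of the original write-up):

[root@banpapp1 ~]# metastat -s edi2 -c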

Build the file systems

[root@banpapp1 ~]# newfs -m 0 -o time /dev/md/edi2/rdsk/d1
/dev/md/edi2/rdsk/d1: Unable to find Media type. Proceeding with system determined parameters.
newfs: construct a new file system /dev/md/edi2/rdsk/d1: (y/n)? y
/dev/md/edi2/rdsk/d1: Unable to find Media type. Proceeding with system determined parameters.
Warning: 2048 sector(s) in last cylinder unallocated
/dev/md/edi2/rdsk/d1:   41943040 sectors in 6827 cylinders of 48 tracks, 128 sectors
        20480.0MB in 427 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
........
super-block backups for last 10 cylinder groups at:
 40997024, 41095456, 41193888, 41292320, 41390752, 41489184, 41587616,
 41686048, 41784480, 41882912

[root@banpapp1 ~]# newfs -m 0 -o time /dev/md/edi2/rdsk/d2
/dev/md/edi2/rdsk/d2: Unable to find Media type. Proceeding with system determined parameters.
newfs: construct a new file system /dev/md/edi2/rdsk/d2: (y/n)? y
/dev/md/edi2/rdsk/d2:   12582912 sectors in 1024 cylinders of 16 tracks, 768 sectors
        6144.0MB in 114 cyl groups (9 c/g, 54.00MB/g, 6592 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 111392, 222752, 334112, 445472, 556832, 668192, 779552, 890912, 1002272,
 11507744, 11619104, 11730464, 11841824, 11953184, 12064544, 12175904,
 12287264, 12386336, 12497696

Update the /etc/vfstab on each node of the cluster

Add the following lines to the /etc/vfstab file on every node in the cluster:

/dev/md/edi2/dsk/d1	/dev/md/edi2/rdsk/d1		/zones/hosts/edi2		ufs	2	no	-
/dev/md/edi2/dsk/d2	/dev/md/edi2/rdsk/d2		/zones/mountpoints/edi2		ufs	3	no	-

Make the mountpoints on each node

Make the following directory mountpoints on each node in the cluster:

[root@banpapp1 ~]# mkdir -p /zones/hosts/edi2 /zones/mountpoints/edi2

Mount the file systems on the primary node

On the primary host for this zone, mount each of the new file systems:

[root@banpapp1 ~]# mount /zones/hosts/edi2
[root@banpapp1 ~]# mount /zones/mountpoints/edi2
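
A quick df confirms that both file systems are mounted where expected (a verification step added here):

[root@banpapp1 ~]# df -h /zones/hosts/edi2 /zones/mountpoints/edi2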

Create the OS directory

On the primary host for this zone, make a directory under the hosts file system. During a live upgrade of the zone, it is this directory, not the underlying file system, that will be renamed. Set the permissions on the directory so that only root has access to it.

[root@banpapp1 ~]# mkdir /zones/hosts/edi2/os
[root@banpapp1 ~]# chmod 700 /zones/hosts/edi2/os

Create the system application directories

On the primary host for this zone, make directories for the zone's /opt/local, /var/local, and /usr/openv directories. These directories will become lofs mountpoints inside the zone.

[root@banpapp1 ~]# mkdir /zones/mountpoints/edi2/opt_local /zones/mountpoints/edi2/var_local /zones/mountpoints/edi2/usr_openv

Configure the zone

On the primary host for this zone, configure the zone, verify the syntax (zonecfg -z edi2 verify), and then verify the semantics (zoneadm -z edi2 verify):

[root@banpapp1 ~]# zonecfg -z edi2 -f /global/zones/config/edi2/zonecfg
[root@banpapp1 ~]# zonecfg -z edi2 verify
[root@banpapp1 ~]# zoneadm -z edi2 verify

If both verification steps report no errors or warnings, install the OS in the zone:

[root@banpapp1 ~]# zoneadm -z edi2 install
Preparing to install zone <edi2>.
Creating list of files to copy from the global zone.
Copying <128209> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1197> packages on the zone.
Initialized <1197> packages on zone.
Zone <edi2> is initialized.
Installation of <3> packages was skipped.
The file </zones/hosts/edi2/os/root/var/sadm/system/logs/install_log> contains a log of the zone installation.

Display the status of the zone

[root@banpapp1 ~]# zoneadm -z edi2 list -v
  ID NAME             STATUS     PATH                           BRAND    IP
   - edi2             installed  /zones/hosts/edi2/os           native   shared

Ready and boot the zone

Ready the zone.

[root@banpapp1 ~]# zoneadm -z edi2 ready

If there are no errors or warnings, boot the zone.

[root@banpapp1 ~]# zoneadm -z edi2 boot

Log into the zone console

Log into the console for the zone and configure the OS (NIS, NFS, root password, etc.). The steps to configure the OS are left to the SysAdmin installing the OS. (To disconnect from the zone console when finished, use the ~. escape sequence.)

[root@banpapp1 ~]# zlogin -C edi2

JASS the zone

After the OS is configured, the zone will reboot. Once the zone is back up, log in and JASS the zone using the suncluster3x-secure.driver driver.

svc.startd: The system is down.

[NOTICE: Zone rebooting]

SunOS Release 5.10 Version Generic_138888-07 64-bit
Copyright 1983-2009 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: edi2
NIS domain name is wayne.edu
Reading ZFS config: done.

edi2 console login: root
Password:
Last login: Thu Apr  2 19:34:11 on console
Apr  2 19:36:05 edi2 login: ROOT LOGIN /dev/console
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
# cd /net/jumpstart/export/jumpstart/
# ./bin/jass-execute -d suncluster3x-secure.driver

Skipping most of the JASS output.

Are you sure that you want to continue? (yes/no): [no]
yes

Skipping most of the JASS output.

The SUMMARY output for the JASS process should look something like the following:

[SUMMARY] Results Summary for APPLY run of suncluster3x-secure.driver
[SUMMARY] The run completed with a total of 100 scripts run.
[SUMMARY] There were  Failures in   0 Scripts
[SUMMARY] There were  Errors   in   0 Scripts
[SUMMARY] There were  Warnings in   0 Scripts
[SUMMARY] There were  Notes    in  81 Scripts

[SUMMARY] Notes Scripts listed in:
        /var/opt/SUNWjass/run/20090402193839/jass-script-notes.txt

If the JASS process was successful, reboot the zone.

# init 6

Disable NTP

After the zone has rebooted, log in and disable NTP (NTP is not supported in non-global zones).

[root@edi2 ~]# rm /etc/inet/ntp.conf
[root@edi2 ~]# svcadm -v disable -s svc:/network/ntp:default
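
To confirm that the service is disabled, check its state with svcs; the STATE column should read disabled (a verification step added here):

[root@edi2 ~]# svcs svc:/network/ntp:default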

Verify the LOFS filesystems

Verify that the LOFS file systems (/global, /opt/local, /var/local, and /usr/openv) have been mounted in the zone:

[root@edi2 ~]# df -h -F lofs
Filesystem             size   used  avail capacity  Mounted on
/dev                    20G   3.6G    16G    19%    /dev
/global                 16G   4.5G    11G    29%    /global
/opt/local             5.9G   6.0M   5.9G     1%    /opt/local
/usr/openv             5.9G   6.0M   5.9G     1%    /usr/openv
/var/local             5.9G   6.0M   5.9G     1%    /var/local
/platform/SUNW,T5440/lib/libc_psr/libc_psr_hwcap2.so.1
                        20G   3.6G    16G    19%    /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,T5440/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
                        20G   3.6G    16G    19%    /platform/sun4v/lib/sparcv9/libc_psr.so.1

Install OpenSource Packages

[root@edi2 ~]# install-pkgs

Skipping all of the install-pkgs output.

Setting up the Runtime Linking Environment

The runtime linking environment defines the default search path that the runtime linker uses to locate shared libraries for programs and system daemons.

Verify the runtime linking environment

First, verify whether the runtime linking environment has been set up by using the /usr/bin/crle command (configure runtime linking environment). This example shows the output for a system that has NOT been configured: the file /var/ld/ld.config is not found, and the Default Library Path (ELF) is set to the system default of /lib:/usr/lib.

[root@banpcold1 ~]# /usr/bin/crle

Default configuration file (/var/ld/ld.config) not found
  Default Library Path (ELF):   /lib:/usr/lib  (system default)
  Trusted Directories (ELF):    /lib/secure:/usr/lib/secure  (system default)

Setup the runtime linking environment

If the runtime linking environment has not yet been set up (see the output above), use the following command to configure it:

[root@banpcold1 ~]# /usr/bin/crle -c /var/ld/ld.config -l /lib:/usr/lib:/opt/local/lib

Next, verify that the runtime linking environment has been set up. The output should show that the file /var/ld/ld.config is now in use and that the Default Library Path (ELF) is no longer the system default, but has been replaced with /lib:/usr/lib:/opt/local/lib, which adds our additional libraries from /opt/local/lib. The output also lists the command that was used to reconfigure the runtime linking environment, which should be crle -c /var/ld/ld.config -l /lib:/usr/lib:/opt/local/lib:

[root@banpcold1 ~]# /usr/bin/crle

Configuration file [version 4]: /var/ld/ld.config
  Default Library Path (ELF):   /lib:/usr/lib:/opt/local/lib
  Trusted Directories (ELF):    /lib/secure:/usr/lib/secure  (system default)

Command line:
  crle -c /var/ld/ld.config -l /lib:/usr/lib:/opt/local/lib

Place zone under Sun Cluster control

The Sun Cluster definition will include, at minimum, one resource for the HASP (HAStoragePlus) and one resource for the zone.

Create a Resource Group for the zone

[root@banpapp1 ~]# cluster_add_rg EDI2 banpapp1 banpapp2,banpapp3

/usr/cluster/bin/scrgadm -a -g EDI2-rg -h banpapp1,banpapp2,banpapp3
/usr/cluster/bin/scswitch -o -g EDI2-rg

Use the output from the cluster_add_rg command to create and manage the Resource Group.

[root@banpapp1 ~]# /usr/cluster/bin/scrgadm -a -g EDI2-rg -h banpapp1,banpapp2,banpapp3
[root@banpapp1 ~]# /usr/cluster/bin/scswitch -o -g EDI2-rg

Validate that the Resource Group has been created.

[root@banpapp1 ~]# scstat -pvv | grep -i edi2
  Device group servers:  edi2                banpapp1            banpapp2
  Device group spares:        edi2                banpapp3
  Device group inactives:     edi2                -
  Device group transitions:   edi2                -
  Device group status:        edi2                Online
 Resources: EDI2-rg        -
     Group: EDI2-rg        banpapp1                 Offline        No
     Group: EDI2-rg        banpapp2                 Offline        No
     Group: EDI2-rg        banpapp3                 Offline        No

Define the HAStoragePlus on one node of the cluster

On one node of the cluster, register the SUNW.HAStoragePlus resource type. This step is executed only once, on a single node of the cluster.

[root@banpapp1 ~]# scrgadm -a -t SUNW.HAStoragePlus
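
To confirm that the resource type is now registered, list the cluster configuration and search for it (a verification step added here):

[root@banpapp1 ~]# scrgadm -p | grep HAStoragePlus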

Create a Resource for the HASP

The HASP resource tells Sun Cluster which file systems must be online for the Resource Group to function. Use the cluster_add_hasp_rs command to generate the required Sun Cluster commands. The zone requires two file systems: one for the OS, and one for the zone's three additional non-OS directories. One HASP resource can handle multiple file systems.

[root@banpapp1 ~]# cluster_add_hasp_rs EDI2 banpapp1 -f /zones/hosts/edi2,/zones/mountpoints/edi2

/usr/cluster/bin/scswitch -z -g EDI2-rg -h banpapp1

/usr/cluster/bin/scrgadm -a -j hasp-EDI2-rs -g EDI2-rg -t SUNW.HAStoragePlus -x \
   FilesystemMountPoints=/zones/hosts/edi2,/zones/mountpoints/edi2 -x AffinityOn=True

/usr/cluster/bin/scswitch -e -j hasp-EDI2-rs

Execute the above three commands.

[root@banpapp1 ~]# /usr/cluster/bin/scswitch -z -g EDI2-rg -h banpapp1
[root@banpapp1 ~]# /usr/cluster/bin/scrgadm -a -j hasp-EDI2-rs -g EDI2-rg -t SUNW.HAStoragePlus -x \
                   FilesystemMountPoints=/zones/hosts/edi2,/zones/mountpoints/edi2 -x AffinityOn=True
[root@banpapp1 ~]# /usr/cluster/bin/scswitch -e -j hasp-EDI2-rs

Verify that the Resource Group and the HASP resource are online on the primary node.

[root@banpapp1 ~]# scstat -pvv | grep -i edi2
  Device group servers:  edi2                banpapp1            banpapp2
  Device group spares:        edi2                banpapp3
  Device group inactives:     edi2                -
  Device group transitions:   edi2                -
  Device group status:        edi2                Online
 Resources: EDI2-rg        hasp-EDI2-rs
     Group: EDI2-rg        banpapp1                 Online         No
     Group: EDI2-rg        banpapp2                 Offline        No
     Group: EDI2-rg        banpapp3                 Offline        No
  Resource: hasp-EDI2-rs   banpapp1                 Online         Online
  Resource: hasp-EDI2-rs   banpapp2                 Offline        Offline
  Resource: hasp-EDI2-rs   banpapp3                 Offline        Offline

Create a Resource for the ZONE

[root@banpapp1 ~]# cd /global/zones/config/edi2
[root@banpapp1 edi2]# cat sczbt_config
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the License).
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/CDDL.txt
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/CDDL.txt.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets [] replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#

#
# Copyright 2008 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# ident "@(#)sczbt_config       1.6     08/04/15 SMI"
#
# This file will be sourced in by sczbt_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
#               RS - Name of the resource
#               RG - Name of the resource group containing RS
#     PARAMETERDIR - Name of the parameter file directory
#       SC_NETWORK - Identifies if SUNW.LogicalHostname will be used
#                       true = zone will use SUNW.LogicalHostname
#                      false = zone will use its own configuration
#
#               NOTE: If the ip-type keyword for the non-global zone is set
#                     to "exclusive", only "false" is allowed for SC_NETWORK
#
#       The configuration of a zone's network addresses depends on
#         whether you require IPMP protection or protection against
#         the failure of all physical interfaces.
#
#       If you require only IPMP protection, configure the zone's
#         addresses by using the zonecfg utility and then place the
#         zone's address in an IPMP group.
#
#               To configure this option set
#                 SC_NETWORK=false
#                 SC_LH=
#
#       If IPMP protection is not required, just configure the
#         zone's addresses by using the zonecfg utility.
#
#               To configure this option set
#                 SC_NETWORK=false
#                 SC_LH=
#
#       If you require protection against the failure of all physical
#         interfaces, choose one option from the following list.
#
#       - If you want the SUNW.LogicalHostName resource type to manage
#           the zone's addresses, configure a SUNW.LogicalHostName
#           resource with at least one of the zone's addresses.
#
#               To configure this option set
#                 SC_NETWORK=true
#                 SC_LH=<Name of the SC Logical Hostname resource>
#
#       - Otherwise, configure the zone's addresses by using the
#           zonecfg utility and configure a redundant IP address
#           for use by a SUNW.LogicalHostName resource.
#
#               To configure this option set
#                 SC_NETWORK=false
#                 SC_LH=<Name of the SC Logical Hostname resource>
#
#       Whichever option is chosen, multiple zone addresses can be
#         used either in the zone's configuration or using several
#         SUNW.LogicalHostname resources.
#
#          e.g. SC_NETWORK=true
#               SC_LH=zone1-lh1,zone1-lh2
#
#            SC_LH - Name of the SC Logical Hostname resource
#         FAILOVER - Identifies if the zone's zone path is on a
#                      highly available local file system
#
#         e.g.  FAILOVER=true - highly available local file system
#               FAILOVER=false - local file system
#
#           HAS_RS - Name of the HAStoragePlus SC resource
#

RS=zone-EDI2-rs
RG=EDI2-rg
PARAMETERDIR=/global/zones/config/edi2/pfiles
SC_NETWORK=false
SC_LH=
FAILOVER=true
HAS_RS=hasp-EDI2-rs 

#
# The following variable will be placed in the parameter file
#
# Parameters for sczbt (Zone Boot)
#
# Zonename      Name of the zone
# Zonebrand     Brand of the zone. Current supported options are
#               "native" (default), "lx", "solaris8" or "solaris9"
# Zonebootopt   Zone boot options ("-s" requires that Milestone=single-user)
# Milestone     SMF Milestone which needs to be online before the zone is
#               considered booted. This option is only used for the
#               "native" Zonebrand.
# LXrunlevel    Runlevel which needs to get reached before the zone is
#               considered booted. This option is only used for the "lx"
#               Zonebrand.
# SLrunlevel    Solaris legacy runlevel which needs to get reached before the
#               zone is considered booted. This option is only used for the
#               "solaris8" or "solaris9" Zonebrand.
# Mounts        Mounts is a list of directories and their mount options,
#               which are loopback mounted from the global zone into the
#               newly booted zone. The mountpoint in the local zone can
#               be different to the mountpoint from the global zone.
#
#               The Mounts parameter format is as follows,
#
#               Mounts="/<global zone directory>:/<local zone directory>:<mount options>"
#
#               The following are valid examples for the "Mounts" variable
#
#               Mounts="/globalzone-dir1:/localzone-dir1:rw"
#               Mounts="/globalzone-dir1:/localzone-dir1:rw /globalzone-dir2:rw"
#
#               The only required entry is the /<global zone directory>, the
#                /<local zone directory> and <mount options> can be omitted.
#
#               Omitting /<local zone directory> will make the local zone
#               mountpoint the same as the global zone directory.
#
#               Omitting <mount options> will not provide any mount options
#               except the default options from the mount command.
#
#               Note: You must manually create any local zone mountpoint
#                     directories that will be used within the Mounts variable,
#                     before registering this resource within Sun Cluster.
# 

Zonename="edi2"
Zonebrand="native"
Zonebootopt=""
Milestone="multi-user-server"
LXrunlevel="3"
SLrunlevel="3"
Mounts=""

Using the sczbt_config file, register the zone with the cluster, and then enable the zone resource.

[root@banpapp1 edi2]# sczbt_register -f /global/zones/config/edi2/sczbt_config
[root@banpapp1 edi2]# scswitch -e -j zone-EDI2-rs
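
Once the resource is enabled, the zone should boot under cluster control on the primary node. Verify with scstat (a verification step added here); the zone-EDI2-rs resource should report Online on banpapp1:

[root@banpapp1 edi2]# scstat -pvv | grep -i edi2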

Adding a network interface to a running ZONE

For each network interface to be added to a zone, do the following:

  1. At minimum, update the "/etc/inet/hosts" file in each global zone of the cluster, plus any zone that will reference the new IP name/address (both the zone running the network interface and any other server referencing that interface).
  2. Update the zonecfg database for the zone in each global zone of the cluster to reflect the new network interface (see the sketch after this list).
  3. Export one of the zonecfg databases for the zone to the "/global/zones/config/ZONE_NAME/zonecfg" file, where ZONE_NAME is the name of the zone.
  4. Using the network interface name from the "zonecfg" file, configure the network interface on the global zone running the zone.
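
As an illustration of steps 2 and 3, the zonecfg update and export for a new interface might look like the following. This is a sketch based on the bantinb1 example below; the address, prefix length, and interface name are assumptions drawn from that example. Note that the zonecfg change does not take effect in the running zone, which is why the ifconfig step afterwards is still required.

[root@bantapp2 ~]# zonecfg -z bantinb1
zonecfg:bantinb1> add net
zonecfg:bantinb1:net> set address=172.20.8.171/16
zonecfg:bantinb1:net> set physical=nxge0
zonecfg:bantinb1:net> end
zonecfg:bantinb1> commit
zonecfg:bantinb1> exit

[root@bantapp2 ~]# zonecfg -z bantinb1 export > /global/zones/config/bantinb1/zonecfg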

Using zone bantinb1 and bantinb1's netbackup interface (nxge0) as an example, the following command would be executed on the global zone running bantinb1 (bantapp2):

[root@bantapp2 ~]# ifconfig nxge0 addif bantinb1-bk up zone bantinb1
Created new logical interface nxge0:5

[root@bantapp2 ~]# ifconfig nxge0:5
nxge0:5: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
       zone bantinb1
       inet 172.20.8.171 netmask ffff0000 broadcast 172.20.255.255

On zone bantinb1, verify that the network interface has been brought online:

[root@bantinb1 ~]# ifconfig nxge0:5
nxge0:5: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
       inet 172.20.8.171 netmask ffff0000 broadcast 172.20.255.255



--Tom Stevenson (talk) 13:52, 16 April 2013 (EDT)
