Solaris Unattended installation

There are two ways of getting started with installing Solaris via Foreman:
a more or less automatic setup of everything using a script, or
the good old manual way.

The script for automatic setup currently only works on Linux (tested with RHEL 6, but it should work on other distributions as well) and only with ISOs for Solaris 10 (tested with x86 and SPARC).

If you want to use the manual way, scroll down a bit.
Reading the manual way is recommended even if you use the script, as it contains some useful
hints in case the automatic setup fails or does not seem to work.
(At least take a look at the troubleshooting part at the end of the page!)

Automatic Setup of Solaris media

Currently the script isn't shipped with foreman, but you can find it here

Using the script is pretty simple: you can either just run it (make sure you are using bash),
or, if you are lazy, pass the name of the Solaris ISO file as the first parameter.

Then just answer the questions you are asked. The value in square brackets is the default;
it is used if you just press Enter.

When the script has completed you may have to add the hardware model to Foreman (More -> Hardware Model); this is mainly needed on SPARC.

Manual setup of Solaris media

Installation Media

First you have to identify the release name of your Solaris install media. To do this, check the disc label on your Solaris DVD.

SOL_10_811_SPARC = hw0811
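The mapping from disc label to release token can be sketched as a small, purely hypothetical shell helper, assuming labels of the form SOL_&lt;version&gt;_&lt;MYY&gt;_&lt;ARCH&gt; with the month/year field zero-padded to four digits:

```shell
# Hypothetical helper: derive the hwMMYY release token from a disc
# label such as SOL_10_811_SPARC (assumes SOL_<ver>_<myy>_<ARCH>)
release_from_label() {
  echo "$1" | awk -F_ '{ printf "hw%04d\n", $3 }'
}

release_from_label SOL_10_811_SPARC   # prints hw0811
```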

It is recommended to create a directory structure that can hold more than one Solaris install medium, like the following:

/Solaris
/Solaris/install
/Solaris/images
/Solaris/jumpstart
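The tree can be created in one go; a minimal sketch (a scratch base path is used here so it runs unprivileged; use /Solaris on the real server):

```shell
# Create the media tree in one command
# (base is /tmp/Solaris for this demo; use /Solaris on a real server)
base="/tmp/Solaris"
mkdir -p "$base"/install "$base"/images "$base"/jumpstart
ls "$base"
```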

Linux:

Copy the contents of the Solaris 10 Install DVD to the local install directory.

Sparc:

cp -r /media/dvd /Solaris/install/Solaris_5.10_sparc_hw0811

i386:

cp -r /media/dvd /Solaris/install/Solaris_5.10_i386_hw0811

Create a symlink from Solaris_5.10_i386_hw0811 to Solaris_5.10_x86_64_hw0811:

cd /Solaris/install
ln -s Solaris_5.10_i386_hw0811 Solaris_5.10_x86_64_hw0811

Note that hw0811 is the release name that has to match your Solaris install media.
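If you keep several i386 media trees, the x86_64 links can be created in a loop; a sketch, demonstrated here in a scratch directory (the real location is /Solaris/install):

```shell
# Mirror every i386 media directory as an x86_64 symlink
# (scratch directory for the demo; use /Solaris/install for real)
demo=/tmp/solaris-install-demo
mkdir -p "$demo/Solaris_5.10_i386_hw0811"
cd "$demo"
for d in Solaris_5.10_i386_*; do
  ln -sfn "$d" "${d/i386/x86_64}"
done
ls -l
```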

Solaris:

Create a directory and run the following script from the Solaris 10 Installation DVD on a Solaris 8 / 10 machine:

mkdir -p /Solaris/install/Solaris_5.10_sparc_hw0811
cd /cdrom/cdrom0/Solaris_10/Tools
./setup_install_server /Solaris/install/Solaris_5.10_sparc_hw0811

A Solaris distribution is declared in the same form as a Linux distribution: there must be an HTTP-based access URL (the Path variable) so that the smart proxy can
download the required components for the build. Currently this step has to be done manually: simply copy the boot loader files (inetboot on SPARC, pxegrub on x86) to your TFTP directory.

Sparc

cp /Solaris/install/Solaris_5.10_sparc_hw0811/Solaris_10/Tools/Boot/platform/sun4u/inetboot /var/lib/tftpboot/Solaris-5.10-hw0811-SUN4U-inetboot

i386

cp /Solaris/install/Solaris_5.10_i386_hw0811/boot/grub/pxegrub /var/lib/tftpboot/Solaris-5.10-hw0811-pxegrub

As the Solaris jumpstart process is performed via NFS rather than TFTP, the distribution media must also be made available for
read-only mounting on the clients.

Linux:

vi /etc/exports
/Solaris *(ro,async,no_root_squash,anonuid=0)
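After adding the line, the export still has to be applied, typically with exportfs -ra (root required). A sketch of the edit, run here against a scratch file so it is safe to try:

```shell
# Append the export line (scratch file for the demo; edit /etc/exports
# for real, then apply it with: exportfs -ra)
exports=/tmp/exports-demo
echo '/Solaris *(ro,async,no_root_squash,anonuid=0)' > "$exports"
grep -c '^/Solaris ' "$exports"   # prints 1
```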

Solaris:

share -F nfs -o ro,anon=0 /Solaris
echo "share -F nfs -o ro,anon=0 /Solaris" >> /etc/dfs/dfstab

The fields describing this alternative access naming scheme are revealed on the Media page when a Solaris operating system is selected. The
Solaris build can proceed either via a conventional package build, where the packages selected form the SUNWCreq minimal install, or via a flash build. The flash archives are located under
the distribution directory by default but can be placed anywhere that can be accessed via NFS.

Name: Solaris Install Media

Path: http://server/Solaris/install/Solaris_$major.$minor_$arch_$release
Media Path: server:/Solaris/install/Solaris_$major.$minor_$arch_$release
Config Path: server:/jumpstart
Image Path: server:/Solaris/images
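As an illustration of how the variables expand, assuming the operating system is defined in Foreman with major 5, minor 10 (SunOS 5.10), architecture sparc and release hw0811:

```shell
# How Foreman's media variables would expand for the example media
# (major/minor/arch/release values are assumptions for illustration)
major=5 minor=10 arch=sparc release=hw0811
echo "http://server/Solaris/install/Solaris_${major}.${minor}_${arch}_${release}"
```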

Jumpstart files

The Solaris jumpstart process occurs in two phases; a diskless client is first booted and then in phase two, the host mounts its build media and configuration files from an NFS location and proceeds with the build. Foreman provides a skeleton configuration directory structure suitable for NFS mounting on the host. In this structure are files that are customised to forward configuration requests to the Foreman instance. This directory tree, located at .../foreman/extras/jumpstart, should be NFS shared to the subnet that contains any potential Solaris clients. Some customization of this directory tree may be required.

Customize dynamic_* scripts

An important step, as mentioned above, is to check whether the dynamic_profile and dynamic_finish scripts fit your needs.
If your Foreman host is not called "foreman" in DNS or is not reachable on port 80, you have to change the value of the "foreman" variable.

dynamic_profile (line #15):

perl -p -i -e "s/hosts:.*/hosts: files dns/" /tmp/root/etc/nsswitch.conf
# and then download our configuration from foreman
foreman="foreman" 
./curl.$arch -s http://$foreman/unattended/provision > ${SI_PROFILE}

dynamic_finish (line #4):

arch=`uname -p`
foreman=foreman
# We load the finish script into the logs directory so as to leave a record
./curl.$arch -s http://$foreman/unattended/finish > /a/var/sadm/system/logs/puppet.postinstall
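Changing the foreman variable in both scripts can be done with sed; a sketch, run here against scratch copies (point it at your real jumpstart directory and hostname):

```shell
# Rewrite the foreman= line in both dynamic_* scripts
# (scratch copies for the demo; the real files live in your
#  jumpstart directory, and the hostname below is an example)
js=/tmp/jumpstart-demo
mkdir -p "$js"
printf 'foreman="foreman"\n' > "$js/dynamic_profile"
printf 'foreman=foreman\n'   > "$js/dynamic_finish"
sed -i 's/^foreman=.*/foreman="foreman.example.com"/' \
  "$js/dynamic_profile" "$js/dynamic_finish"
grep -h '^foreman=' "$js"/dynamic_*
```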

See Solaris_jumpstart_directory for more details

The files are read in the following order:

1. server:/jumpstart/rules.ok
2. server:/jumpstart/dynamic_profile
3. Foreman -> Provision template: Jumpstart Default
4. server:/jumpstart/dynamic_finish
5. Foreman -> Provision template: Jumpstart Default Finish

Linux:

cp -r /usr/share/foreman/extras/jumpstart /Solaris/jumpstart
vi /etc/exports
/Solaris/jumpstart *(ro,async,no_root_squash,anonuid=0)

Solaris:

cp -r /usr/share/foreman/extras/jumpstart /Solaris/jumpstart
share -F nfs -o ro,anon=0 /Solaris/jumpstart
echo "share -F nfs -o ro,anon=0 /Solaris/jumpstart" >> /etc/dfs/dfstab

Edit Model

You need to set up a model for each Solaris SPARC host type that you want to deploy.

Name: Sun Ultra 10
Hardware Model: SUN4U
Vendor Class: Ultra-5_10

Model consolidation

When Foreman imports a host that has not been configured and built by Foreman, it will attempt to determine the model of that machine by analyzing the facts associated with the host. This can often result in many badly named models, all referring to what should be a single manufacturer's model. A rake task has been provided that attempts to consolidate all these malformed duplicate names into a single sensible model, together with the appropriate Solaris vendor class and Solaris hardware model. See rake models::consolidate

Starting deployment

After setting up a new host in Foreman, start your Intel Solaris machine and boot from PXE. On SPARC machines press Stop+A during boot-up and enter the following:

boot net:dhcp - install

Troubleshooting

The installer doesn't load the jumpstart template

If you get an error about an empty jumpstart template, or something like "'<!DOCTYPE' invalid...", you have to fix your dynamic_finish and dynamic_profile scripts.

Remove this from dynamic_profile:

# and then download our configuration from foreman
foreman="foreman" 
./curl.$arch -s http://$foreman/unattended/provision > ${SI_PROFILE}

And add this instead:
foreman="your.foreman.host:port" 
ipaddress=`ifconfig -a | grep -v ether | grep -v zone | grep -v groupname | grep -v flags= | grep -v 0.0.0.0 | grep -v 127.0.0. | awk '{print $2}' | tail -1`

./curl.$arch -s http://$foreman/unattended/provision?spoof=$ipaddress > ${SI_PROFILE}

Remove this from dynamic_finish:

foreman=foreman
# We load the finish script into the logs directory so as to leave a record
./curl.$arch -s http://$foreman/unattended/finish > /a/var/sadm/system/logs/puppet.postinstall

and add this instead:
foreman="your.foreman.host:port" 
ipaddress=`ifconfig -a | grep -v ether | grep -v zone | grep -v groupname | grep -v flags= | grep -v 0.0.0.0 | grep -v 127.0.0. | awk '{print $2}' | tail -1`

./curl.$arch -s http://$foreman/unattended/finish?spoof=$ipaddress > /a/var/sadm/system/logs/puppet.postinstall
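The spoof parameter simply appends the client's own IP address so Foreman knows which host's template to render; an illustration with example values (hostname, port and IP are placeholders):

```shell
# Build the spoofed provision URL (hostname and IP are examples)
foreman="foreman.example.com:3000"
ipaddress=192.0.2.10
echo "http://$foreman/unattended/provision?spoof=$ipaddress"
```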

Don't forget to replace 'your.foreman.host:port' with your actual Foreman host.

Updated by Lukas Zapletal almost 3 years ago · 29 revisions