
h1. Solaris Unattended installation 

There are two ways of getting started with installing Solaris via Foreman:
a more or less automatic setup of everything using a script, or
the good old manual way.

The script for automatic setup currently only works on Linux (tested with RHEL 6, but it should work on other distributions as well) and only with ISOs for Solaris 10 (tested with x86 and SPARC).

If you want to use the manual way, scroll down a bit.
I recommend reading the manual section even if you are using the script, as it contains some useful
hints in case the automatic setup fails or does not seem to work.
(At least take a look at the troubleshooting part at the end of the page!)


 h2. Automatic Setup of Solaris media 

Currently the script isn't shipped with Foreman, but you can find it "here":http://www.case-of.org/~maniac/foreman/import_solaris_disk.sh

Using the script is pretty simple: you can just run it (make sure you run it with bash).
If you are lazy (like me), you can pass the name of the Solaris ISO file as the first parameter.

Then just answer the questions you get asked. The part in square brackets is the default;
it gets used if you just press enter.
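
For example (the ISO file name below is just a placeholder, use the name of your own image):

<pre>
# run interactively and answer the prompts
bash import_solaris_disk.sh

# or pass the ISO as the first parameter
bash import_solaris_disk.sh sol-10-u10-ga2-sparc-dvd.iso
</pre>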

When the script has completed, you may have to add the hardware model to Foreman (More -> Hardware Model); this is mostly needed on SPARC.



 h2. Manual setup of Solaris media 


 h2. Installation Media 


First you have to identify the release name of your Solaris install media. To do this, check the disc label on your Solaris DVD.

 <pre> 
 SOL_10_811_SPARC = hw0811 
 </pre> 
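
If you only have the ISO image at hand, the volume label can also be read with isoinfo (from the genisoimage package on Linux; the file name below is just an example):

<pre>
isoinfo -d -i sol-10-u10-ga2-sparc-dvd.iso | grep -i "volume id"
</pre>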

I recommend creating a directory structure that can hold more than just one Solaris install media, like the following:

 <pre> 
 /Solaris 
 /Solaris/install 
 /Solaris/images 
 /Solaris/jumpstart 
 </pre> 
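
On Linux the whole tree can be created in one go (a sketch; adjust the base path if you prefer a different layout):

<pre>
mkdir -p /Solaris/install /Solaris/images /Solaris/jumpstart
</pre>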

 h3. Linux: 

 Copy the contents of the Solaris 10 Install DVD to the local install directory. 

 h3. Sparc: 

 <pre> 
 cp -r /media/dvd /Solaris/install/Solaris_5.10_sparc_hw0811 
 </pre> 

 h3. i386: 

 <pre> 
 cp -r /media/dvd /Solaris/install/Solaris_5.10_i386_hw0811 
 </pre> 

Create a symbolic link so that Solaris_5.10_x86_64_hw0811 points to Solaris_5.10_i386_hw0811:

 <pre> 
 cd /Solaris/install 
 ln -s Solaris_5.10_i386_hw0811 Solaris_5.10_x86_64_hw0811 
 </pre> 

 Note that hw0811 is the release name that has to match your Solaris install media. 

 h3. Solaris: 

 Create a directory and run the following script from the Solaris 10 Installation DVD on a Solaris 8 / 10 machine: 

 <pre> 
 mkdir -p /Solaris/install/Solaris_5.10_sparc_hw0811 
 cd /cdrom/cdrom0/Solaris_10/Tools 
 ./setup_install_server /Solaris/install/Solaris_5.10_sparc_hw0811 
 </pre> 

A Solaris distribution should be declared in the same form as a Linux distribution. There should be an HTTP-based access URL (the Path variable) so that the smart proxy can
download the required components for the build. Currently this step has to be done manually: simply copy the inetboot files to your TFTP directory.

 h3. Sparc 

 <pre> 
 cp /Solaris/install/Solaris_5.10_sparc_hw0811/Solaris_10/Tools/Boot/platform/sun4u/inetboot /var/lib/tftpboot/Solaris-5.10-hw0811-SUN4U-inetboot 
 </pre> 

 h3. i386 

 <pre> 
 cp /Solaris/install/Solaris_5.10_i386_hw0811/boot/grub/pxegrub /var/lib/tftpboot/Solaris-5.10-hw0811-pxegrub 
 </pre> 

As the Solaris jumpstart process is performed via NFS rather than TFTP, the distribution media must also be made available for
read-only mounting on the clients.

 h3. Linux: 

 <pre> 
 vi /etc/exports 
 "/Solaris" *(ro,async,no_root_squash,anonuid=0) 
 </pre> 
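
After editing /etc/exports the export table has to be reloaded. On RHEL 6, assuming the NFS server is already installed and running, something like this should do:

<pre>
exportfs -ra
# verify that the share is now visible
showmount -e localhost
</pre>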

 h3. Solaris: 

 <pre> 
 share -F nfs -o ro,anon=0 /Solaris 
 echo "share -F nfs -o ro,anon=0 /Solaris" >> /etc/dfs/dfstab 
 </pre> 
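
If the NFS server is not yet running on the Solaris machine, it may need to be enabled first; on Solaris 10 this can be done via SMF (adjust for your release):

<pre>
svcadm enable network/nfs/server
# verify that the directory is shared
share
</pre>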

The fields describing this alternative access naming scheme are revealed on the Media page when a Solaris operating system is selected. The
Solaris build can proceed either via a conventional package build, where the packages selected are the SUNWCreq minimal install, or via a flash build. The flash archives are located under
the distribution directory by default but can be placed anywhere that can be accessed via NFS.

 <pre> 
 Name: Solaris Install Media 

 Path: http://server/Solaris/install/Solaris_$major.$minor_$arch_$release 
 Media Path: server:/Solaris/install/Solaris_$major.$minor_$arch_$release 
 Config Path: server:/jumpstart 
 Image Path: server:/Solaris/images 
 </pre> 
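
As an illustration: with the SPARC media created above and an operating system defined so that $major.$minor expands to 5.10, $arch to sparc and $release to hw0811 (the server name is a placeholder), the first two fields resolve to:

<pre>
Path:       http://server/Solaris/install/Solaris_5.10_sparc_hw0811
Media Path: server:/Solaris/install/Solaris_5.10_sparc_hw0811
</pre>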

 h2. Jumpstart files 

 The Solaris jumpstart process occurs in two phases; a diskless client is first booted and then in phase two, the host mounts its build media and configuration files from an NFS location and proceeds with the build. Foreman provides a skeleton configuration directory structure suitable for NFS mounting on the host. In this structure are files that are customised to forward configuration requests to the Foreman instance. This directory tree, located at .../foreman/extras/jumpstart, should be NFS shared to the subnet that contains any potential Solaris clients. Some customization of this directory tree may be required.  

 h3. Customize dynamic_* scripts 

An important step, as mentioned above, is to check whether the dynamic_profile and dynamic_finish scripts fit your needs.
If your Foreman host is not called "foreman" in DNS or is not reachable on port 80, you have to change the value of the "foreman" variable.

 dynamic_profile (line #15): 
 <pre> 
 perl -p -i -e "s/hosts:.*/hosts: files dns/" /tmp/root/etc/nsswitch.conf 
 # and then download our configuration from foreman 
 foreman="foreman" 
 ./curl.$arch -s http://$foreman/unattended/provision > ${SI_PROFILE} 
 </pre> 

 dynamic_finish (line #4): 
 <pre> 
 arch=`uname -p` 
 foreman=foreman 
 # We load the finish script into the logs directory so as to leave a record 
 ./curl.$arch -s http://$foreman/unattended/finish > /a/var/sadm/system/logs/puppet.postinstall 
 </pre> 
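
If you need to point both scripts at a different Foreman host, one way to do it on a Linux jumpstart server (GNU sed; the host name below is only an example) is:

<pre>
cd /Solaris/jumpstart
sed -i 's/^foreman=.*/foreman="foreman.example.com"/' dynamic_profile dynamic_finish
</pre>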


See [[Solaris_jumpstart_directory]] for more details.

 The files are read in the following order: 

 1. server:/jumpstart/rules.ok 
 2. server:/jumpstart/dynamic_profile 
 3. Foreman -> Provision template: Jumpstart Default 
 4. server:/jumpstart/dynamic_finish 
 5. Foreman -> Provision template: Jumpstart Default Finish 
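
To verify that Foreman will actually deliver a provisioning template for a given client, you can request it manually from the jumpstart server before starting a build (host name and IP address below are placeholders):

<pre>
curl -s "http://foreman/unattended/provision?spoof=192.168.1.50"
</pre>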

 h3. Linux: 

 <pre> 
 cp -r /usr/share/foreman/extras/jumpstart /Solaris/jumpstart 
 vi /etc/exports 
 "/Solaris/jumpstart" *(ro,async,no_root_squash,anonuid=0) 
 </pre> 

 h3. Solaris: 
 <pre> 
 cp -r /usr/share/foreman/extras/jumpstart /Solaris/jumpstart 
 share -F nfs -o ro,anon=0 /jumpstart 
 echo "share -F nfs -o ro,anon=0 /jumpstart" >> /etc/dfs/dfstab 
 </pre> 

 h2. Edit Model 

You need to set up a model for each type of Solaris SPARC machine that you want to deploy, for example:

 <pre> 
 Name: Sun Ultra 10 
 Hardware Model: SUN4U 
 Vendor Class: Ultra-5_10 
 </pre> 
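
If you are unsure which values to use, they can usually be derived from the running client itself; on a Sun Ultra 10, for example, the platform name reported by uname (minus the "SUNW," prefix) corresponds to the vendor class:

<pre>
uname -m    # sun4u           -> Hardware Model: SUN4U
uname -i    # SUNW,Ultra-5_10 -> Vendor Class: Ultra-5_10
</pre>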

 h2. Model consolidation 

When Foreman imports a host that has not been configured and built by Foreman, it will attempt to determine the model of that machine by analyzing the facts that are associated with the host. This can often result in many badly named models all referring to what should be a single manufacturer's model. A rake task has been provided that attempts to consolidate all these duplicate malformed names into a single sensible model together with the appropriate Solaris vendor class and Solaris hardware model. See [[models_consolidate|rake models::consolidate]]

 h2. Starting deployment 

After setting up a new host in Foreman, start your Intel Solaris machine and boot from PXE. On SPARC machines press STOP+A during bootup and enter the following:

 boot net:dhcp - install 

 h1. Troubleshooting 

h2. The installer doesn't load the jumpstart template

If you get an error about an empty jumpstart template or something like "'<!DOCTYPE' invalid...", you have to fix your default_finish and default_profile scripts.

 Remove this from default_profile: 
 <pre> 
 # and then download our configuration from foreman 
 foreman="foreman" 
 ./curl.$arch -s http://$foreman/unattended/provision > ${SI_PROFILE} 
 </pre> 
 And add this instead: 
 <pre> 
 foreman="your.foreman.host:port" 
 ipaddress=`ifconfig -a | grep -v ether | grep -v zone | grep -v groupname | grep -v flags= | grep -v 0.0.0.0 | grep -v 127.0.0. | awk '{print $2}' | tail -1` 

 ./curl.$arch -s http://$foreman/unattended/provision?spoof=$ipaddress > ${SI_PROFILE} 
 </pre> 

 Remove this from default_finish: 
 <pre> 
 foreman=foreman 
 # We load the finish script into the logs directory so as to leave a record 
 ./curl.$arch -s http://$foreman/unattended/finish > /a/var/sadm/system/logs/puppet.postinstall 
 </pre> 
 and add this instead: 
 <pre> 
 foreman="your.foreman.host:port" 
 ipaddress=`ifconfig -a | grep -v ether | grep -v zone | grep -v groupname | grep -v flags= | grep -v 0.0.0.0 | grep -v 127.0.0. | awk '{print $2}' | tail -1` 

 ./curl.$arch -s http://$foreman/unattended/finish?spoof=$ipaddress > /a/var/sadm/system/logs/puppet.postinstall 
 </pre> 

Don't forget to replace 'your.foreman.host:port' with your actual Foreman host (and port).