h1. Frequently Asked Questions
h2. I'm not using Storeconfigs, how can I still use Torque?
Torque does not require Puppet storeconfigs; however, Torque can work with the Puppet DB schema natively, as Torque extends it.
If you just want to import your hosts' inventory (facts), you can use the rake task:
<pre>RAILS_ENV=production rake puppet:import:hosts_and_facts</pre>
This will import your existing facts YAML files (defaults to vardir/yaml/facts); if you wish to import from another directory, use:
<pre>RAILS_ENV=production rake puppet:import:hosts_and_facts dir=/my/dir/with/yaml/files</pre>
*NOTE:* it's probably a good idea to clean up your YAML file directory first, as it might contain a lot of stale data.
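One way to do that cleanup is to delete facts files older than a given age before importing. A minimal sketch (the directory shown in the comment is Puppet's typical default; adjust to your setup):

```shell
#!/bin/sh
# Prune facts YAML files older than a given number of days, so stale
# (decommissioned) hosts are not imported into Torque.
prune_old_facts() {
  # $1 = directory containing facts YAML files, $2 = age threshold in days
  find "$1" -name '*.yaml' -mtime +"$2" -print -delete
}
# Typical usage against Puppet's default yaml cache, 30-day threshold:
# prune_old_facts /var/lib/puppet/yaml/facts 30
```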
h2. I'm using Storeconfigs, how can I populate the various settings in Torque that are required for hands-free (unattended) installations?
<pre>RAILS_ENV=production rake puppet:migrate:populate_hosts</pre>
This will try to auto-generate all operating systems, Puppet environments, etc. in Torque's DB.
h2. How do I use unattended installations (Kickstart, Jumpstart, preseed)?
Torque automates the network boot process using PXE boot (or native Solaris net:dhcp).
At this time, Torque does not support DHCP and DNS alteration; you need to perform those steps manually.
h3. TFTP
Torque currently has limited TFTP support: it requires the TFTP server to be accessible via the local file system.
Future versions of Torque will allow remote TFTP servers as well.
Make sure you add the following to your config/settings.yml:
<pre> :tftppath: /var/lib/tftpboot/pxelinux.cfg</pre>
Replace the value with your actual TFTP directory and ensure that the *user which executes Torque has write access* to it.
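A quick way to confirm that access, sketched as a small shell check (run it as the user that will execute Torque; the path should match your :tftppath: value):

```shell
#!/bin/sh
# Report whether the current user can write to the pxelinux config directory.
check_tftp_writable() {
  if [ -w "$1" ]; then
    echo "ok: $(id -un) can write to $1"
  else
    echo "warning: $(id -un) cannot write to $1"
  fi
}
# Typical usage:
# check_tftp_writable /var/lib/tftpboot/pxelinux.cfg
```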
h4. How does Torque manage TFTP?
When you click the Build button (in the host list), Torque generates a link that is automatically read by pxelinux.
This link points to a predefined syslinux (pxelinux) boot file, chosen based on the operating system used.
After a successful OS installation, that link is removed and your default PXE settings are served again.
The idea behind this is to set the boot order on each host to always boot from the network, and then change the behavior via Torque.
This avoids the need to press F12 on each machine just to reinstall it: clicking Build in Torque triggers a host re-installation upon the next reboot, provided the default boot order is PXE.
An example of such a file for CentOS 5 (32-bit):
<pre>append initrd=boot/centos-5-32.initrd.img ks=http://ginihost/unattended/kickstart ksdevice=eth0 network kssendmac</pre>
and another example for Ubuntu 9.04 (32-bit):
<pre>append initrd=boot/ubuntu-9.04-32.initrd.gz ramdisk_size=10800 root=/dev/rd/0 rw auto preseed/url=http://ginihost/unattended/preseed console-keymaps-at/keymap=us locale=en_US interface=eth0 DEBCONF_PRIORITY=critical netcfg/dhcp_timeout=60 --</pre>
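For context, an append line like the CentOS one above sits inside a full pxelinux entry. A sketch of what such a generated file might look like (the label name and kernel path here are assumptions for illustration, not Torque's literal output):

```
default linux
label linux
  kernel boot/centos-5-32.vmlinuz
  append initrd=boot/centos-5-32.initrd.img ks=http://ginihost/unattended/kickstart ksdevice=eth0 network kssendmac
```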
h3. What's inside the Kickstart / Jumpstart / preseed?
These files are all generated dynamically, based on the settings of each host in Torque; things like partition tables and the root password can be unique per host.
If you want to see the generated kickstart/preseed etc., you may use the spoof parameter; just point your browser to:
* 123.321.123.321 is the host's IP address (the one you want to build).
* Usually you will want to view the page source; the browser might render the file as HTML, which results in hard-to-read output.
* If you are using Passenger, please remove the ":3000" from the URL.
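The same preview can be fetched from the command line, which sidesteps the HTML-rendering problem. A sketch, assuming a kickstart host; the server name "torque.example.com" is a placeholder for your actual Torque host:

```shell
#!/bin/sh
# Build the "spoof" preview URL for a host's generated kickstart.
TORQUE_HOST="${TORQUE_HOST:-torque.example.com:3000}"  # drop ":3000" under Passenger
SPOOF_IP="${SPOOF_IP:-123.321.123.321}"                # the IP of the host to build
URL="http://${TORQUE_HOST}/unattended/kickstart?spoof=${SPOOF_IP}"
echo "$URL"
# Fetch it as plain text instead of letting a browser render it:
# curl -s "$URL"
```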
h3. Modifying the unattended templates
You will probably want to make minor tweaks to your kickstart/jumpstart/preseed templates (yes, the same kind Puppet uses).
The templates can be found at:
RedHat-based installation
and a finish script
h3. PuppetCA
Torque enables host autosigning at provisioning time; this means the user which executes Torque must have:
1. write access to /etc/puppet/autosign.conf
2. sudo access to run puppetca
Once a host (which is enabled for build) requests its kickstart/jumpstart etc., an entry is created in the autosign.conf file.
Each operating system runs puppetd after the OS installation but before the first reboot; this acquires the Puppet certificate. The host then notifies Torque that it has finished the installation, and Torque removes the entry from the autosign file automatically.
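As a sketch of what that first puppetd run looks like inside a RedHat kickstart %post section (the server name and exact flags here are assumptions; the templates Torque ships may differ):

```
%post
# First puppet run: the autosign entry created above lets this host
# obtain its certificate without manual signing.
/usr/sbin/puppetd --test --server puppet.example.com
```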
h2. How do I use Torque with Puppet external nodes?
Torque can provide parameter and class information to Puppet.
Each host can be associated with multiple classes and can have multiple parameters (available via the edit host button).
Under the <b>extras/externalnodes</b> directory you will find an example script that queries the Torque DB.
You need to set up Puppet to use external nodes:
<pre><code> external_nodes = /etc/puppet/node.rb
node_terminus = exec</code></pre>
For additional info please see "Puppet documentation":http://reductivelabs.com/trac/puppet/wiki/ExternalNodes
You may also click on the YAML link to see the output that would be used for Puppet external nodes.
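For reference, Puppet expects an external nodes script to print YAML describing the host on stdout; the output has this general shape (the class and parameter names below are purely illustrative):

```yaml
classes:
  - ntp
  - ssh
parameters:
  puppetmaster: puppet.example.com
  environment: production
```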
h3. How do I import my current classes from Puppet?
Torque can import all of your Puppet classes directly from Puppet; just run:
h2. Where is the DB?!
By default, Torque uses SQLite3 as its database; its configuration can be found at:
By default, the database file can be found in the db subdirectory.
Torque is a Rails application; therefore, anything supported by Rails (SQLite, MySQL, PostgreSQL, Oracle, etc.) can be used.
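Rails reads its database settings from config/database.yml. For example, a hypothetical entry switching the production environment to MySQL might look like this (database name and credentials are placeholders):

```yaml
production:
  adapter: mysql
  database: torque_production
  host: localhost
  username: torque
  password: secret
```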
h2. What about other operating systems?
Torque currently supports RedHat/Fedora, Debian/Ubuntu and Solaris Jumpstart.
It has been successfully tested on CentOS 3, 4 and 5, Fedora 10-11, Ubuntu 9.04 and Solaris 8-10 on SPARC.
If there is another operating system you would like to see added to Torque, please contact us and we will be happy to add it.
For Jumpstart support, as Solaris does not natively support accessing the profile data dynamically, some workarounds are required;
examples of those can be found in the *extras/jumpstart* directory.
You may find the dynamic profile and dynamic finish scripts at the following URLs:
It is also required to add vendor options to your DHCP server if you plan to boot from the network on the SPARC platform.