
HOW TO : Use templates in puppet to pass hostnames

Puppet is a configuration management framework that can be used to validate and configure your infrastructure. We have been using Puppet for some time at my work and have just started moving into some of the advanced uses of the tool.

One of the features offered by Puppet is the ability to use templates to configure different servers.

For example, say you want to configure an application on servers ABCD, XYZ and 123. The configuration file is identical on all of these servers, except for the hostname, and it has to reside at /opt/application/config.conf. The config.conf file looks like this

[code]

db.name=blah
db.user=blahblah
db.hostname=XYZ
log.level=ERROR
log.location=/var/log/application

[/code]

Here is how you can do it in puppet.

Define a module that uses a template, and then configure the template to fill in the host-specific entry. Let’s name our module test_config

  • Create the module
    • cd $PUPPET_HOME/modules
    • mkdir -p test_config/{files,manifests,templates}
  • Create the template
    • cd templates
    • vi config.conf.template and add the following to the file[code]db.name=blah
      db.user=blahblah
      db.hostname=<%= fqdn %>
      log.level=ERROR
      log.location=/var/log/application [/code]
      • note : See how I replaced the hostname XYZ, which was specific to one server, with <%= fqdn %>. This is one of the “facts” provided by Puppet. You can get a list of all the facts by running facter on any of the puppet clients.
  • Configure the module to use the template. In this case, we want the module to place the file config.conf in /opt/application
    • cd manifests
    • vi init.pp and add the following to the file[code]class test_config {
        file { "/opt/application/config.conf":
          ensure  => present,
          owner   => appuser,
          group   => appuser,
          mode    => "0755",
          content => template("test_config/config.conf.template"),
        }
      }[/code]
      • note : There are several other attributes you can set on the file resource; these are just some of the common ones, like the owner, group, and permissions.
  • Finally, configure the clients to use the module. In the individual node config files, include the module you just created. Here is what the config for node ABCD would look like[code]node ABCD {
    include test_config
    }[/code]

The next time the puppet client runs on host ABCD, it will create the file /opt/application/config.conf with the right hostname in the config file.
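
If you want to sanity-check the fact and the module before rolling them out everywhere, something like this works on a client (a minimal sketch, assuming a reasonably recent Puppet agent):

[code]
# Confirm the value Puppet will substitute for <%= fqdn %>
facter fqdn

# Trigger a one-off agent run and inspect the result
sudo puppet agent --test
cat /opt/application/config.conf
[/code]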

HOW TO : Configure Jboss to append log files instead of overwriting them

If you use the default logging options for Jboss, it has a nasty habit of overwriting log files on a restart. So, if you were in the middle of troubleshooting an issue and had to restart Jboss, you would end up losing all the historic data. You can change this default behavior by changing one option in the log4j config file

  • Edit the $JBOSS_HOME/server/$JBOSS_PROFILE/conf/jboss-log4j.xml and replace [code]<param name="Append" value="false"/>[/code]

    with [code]<param name="Append" value="true"/>[/code]

  • You don’t even have to restart Jboss for this new setting to take effect, since Jboss re-reads the log4j config every 60 seconds and updates the logging parameters accordingly. (A scripted version of the edit is sketched below.)
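
If you manage several profiles and would rather script the change, the same edit with sed might look like this (a sketch; the profile name "default" and the stock file layout are assumptions, so adjust for your setup):

[code]
# Flip the FILE appender from overwrite to append, keeping a backup
CONF=$JBOSS_HOME/server/default/conf/jboss-log4j.xml
sudo cp $CONF $CONF.bak
sudo sed -i 's|<param name="Append" value="false"/>|<param name="Append" value="true"/>|' $CONF
[/code]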

Project Uptime : Progress Report 7 : Putting the finishing touches

We finally come to one of the last posts of Project Uptime. Now that all the components have been set up, I copied the wordpress directory from my old server to the new one. The only changes I had to make after copying the files were

  1. Configure Apache to serve the wordpress folder as the default directory. I did this by changing the DocumentRoot option in the vhost (see the sketch after the code block below)
  2. Changed the permissions on the wordpress directories (so that wordpress can make rewrite rule changes on the fly)

[code]sudo chmod -v 664 $WORDPRESS_DIRECTORY/.htaccess
sudo chmod 755 $WORDPRESS_DIRECTORY/wp-content[/code]
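
For the DocumentRoot change in step 1, the edit amounts to something like this (a sketch; the vhost file name and the wordpress path are assumptions about my setup, adjust for yours):

[code]
# Point the default vhost at the wordpress directory and reload Apache
sudo sed -i 's|DocumentRoot /var/www|DocumentRoot /var/www/wordpress|' /etc/apache2/sites-available/default
sudo service apache2 reload
[/code]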

HOW TO : Configure Jboss to use hugepages in RHEL/CentOS

Most of us worry about paging to disk (swap), but if you are running a transaction-intensive application, the paging that happens in RAM also starts to impact application performance. This happens due to the size of the “page” that is used to store data in memory. Hugepages allow you to store the data in bigger blocks, reducing the need to page while interacting with the data.

Here is how you can enable hugepages and configure Jboss (actually, any Java app) to use hugepages on a RHEL/CentOS system.

OS CONFIGURATION

  1. Check if your system is capable of supporting hugepages by running[code]grep HUGETLB /boot/config-`uname -r`[/code]

    If you see the response below, you should be good[code]CONFIG_HUGETLBFS=y
    CONFIG_HUGETLB_PAGE=y
    [/code]

  2. Next, check if hugepages are already being used by running[code]cat /proc/sys/vm/nr_hugepages [/code]

    If the response is anything other than 0, hugepages have already been configured.

  3. Find the block size for hugepages by running[code]cat /proc/meminfo | grep -i hugepagesize [/code]

  4. Calculate the amount of memory you want to dedicate to hugepages. (note: memory allocated to hugepages cannot be used by other processes in the system, unless they are configured to use it) For example, I want to dedicate 3GB of RAM to hugepages, so the number of hugepages works out to[code](3*1024*1024)/2048 = 1536[/code]

    (see the sketch after this list for a scripted version of this calculation)

  5. Configure the number of hugepages on the system by editing /etc/sysctl.conf and adding the option[code]vm.nr_hugepages = 1536[/code]

    (note: I put in 1536 since that was the value I got from the above example)

  6. Restart the server and check if hugepages have been enabled by running[code]cat /proc/meminfo | grep -i huge [/code]

    You should see something like this[code]AnonHugePages:    839680 kB
    HugePages_Total:    1500
    HugePages_Free:     1500
    HugePages_Rsvd:        0
    HugePages_Surp:        0
    Hugepagesize:       2048 kB
    [/code]
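
Here is the calculation from step 4 as a quick script, in case you want a different allocation (a sketch; DESIRED_GB is a placeholder):

[code]
# Pages needed = desired size in kB / hugepage size in kB
DESIRED_GB=3
PAGE_KB=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)
echo $(( DESIRED_GB * 1024 * 1024 / PAGE_KB ))   # -> 1536 with 2048 kB pages
[/code]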

JBOSS CONFIGURATION

  1. At this point, your system is configured with hugepages, and any application that is configured to use them can leverage them. In this example, we want to configure Jboss to utilize these hugepages.
  2. Add the group ID of the user that Jboss is running under to the /etc/sysctl.conf file. In my case, the jboss user group had a GID of 505, so I added this line to /etc/sysctl.conf[code]vm.hugetlb_shm_group = 505 [/code]
  3. Next, allow the user to lock that memory by editing /etc/security/limits.conf. Note that the memlock limit is expressed in kB, so for the 3GB of hugepages above the value works out to 3*1024*1024 = 3145728. I added the following to /etc/security/limits.conf[code]# Allow the jboss user to lock memory for hugepages (values in kB)
    jboss   soft    memlock 3145728
    jboss   hard    memlock 3145728
    [/code]
  4. Finally, add the following to the Jboss startup parameters. I edited the $JBOSS_HOME/bin/run.sh file (note: the startup file can be different based on your config) with the option[code] -XX:+UseLargePages[/code] (see the sketch after this list for one way to wire it in)
  5. Restart Jboss and you are good to go
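
One way to add the flag without editing run.sh directly (a sketch, assuming a stock JBoss AS layout where run.sh sources run.conf at startup):

[code]
# Append the JVM flag to JAVA_OPTS via run.conf, which run.sh sources
echo 'JAVA_OPTS="$JAVA_OPTS -XX:+UseLargePages"' >> $JBOSS_HOME/bin/run.conf
[/code]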

note : A lot of the articles I read online say that hugepages are effective when you are allocating large amounts of RAM to the application. The use case of just 3GB above was just that.. a use case.

While I cannot personally vouch for it, a lot of users have reported a more than two-fold increase in performance.

HOW TO : Sync git clients across workstations using dropbox

I have recently started using git as source control for the various scripts that I write. As I also mentioned in this post, I use dropbox to synchronize my data across workstations. Here is my setup for synchronizing git clients across multiple workstations using the same SSH keys (note: this is not a recommended setup from a security perspective; you should generate a different SSH key pair per workstation to ensure that one lost key doesn’t compromise your entire account).

  1. Workstation 1
    1. create a directory under your dropbox root, that you want to use as your git home directory. Say DROPBOX/git
    2. Install Git for Windows, or whatever git client you want to use
    3. Change the home path on the git client by executing [code]HOME='PATH_TO_DROPBOX/DROPBOX/git'[/code]
    4. Check if the home path has been changed by executing [code]echo $HOME[/code]
    5. Create your SSH keys and configure your public key on the git server (see the sketch after this list)
  2. Workstation 2
    1. Rinse and repeat steps 1 – 4 from workstation 1. You don’t need to create the SSH keys, since this client will pick up the keys that dropbox has already synced.
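
For step 1.5, the key generation on workstation 1 might look like this (a sketch; the comment string is a placeholder):

[code]
# HOME already points at the Dropbox-hosted git directory from step 1.3
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -C "you@example.com" -f "$HOME/.ssh/id_rsa"
cat "$HOME/.ssh/id_rsa.pub"   # register this public key on the git server
[/code]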

Project Uptime : Progress Report 6 : Tweaking Varnish

The server has held up pretty well since the installation of varnish. Based on this wiki post, I added the following to /etc/varnish/default.vcl

[code]
# Drop any cookies sent to WordPress.
sub vcl_recv {
  if (!(req.url ~ "wp-(login|admin)")) {
    unset req.http.cookie;
  }
}

# Drop any cookies WordPress tries to send back to the client.
sub vcl_fetch {
  if (!(req.url ~ "wp-(login|admin)")) {
    unset beresp.http.set-cookie;
  }
}
[/code]

I think the comments are pretty self-explanatory.
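
To confirm the rules are doing their job, you can request a regular page twice and watch the response headers (a sketch; the exact headers vary a bit between Varnish versions):

[code]
# No Set-Cookie should appear, and the repeat hit should show a non-zero Age
curl -sI http://localhost/ | grep -iE 'set-cookie|age|x-varnish'
curl -sI http://localhost/ | grep -iE 'set-cookie|age|x-varnish'
[/code]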

Thank you Mr.Tramiel, for introducing me to the world of computers..

When I was ~9 years old, my dad brought home a Commodore 64. It was slow, it had terrible graphics, and it took forever to load a program using its “tape” drive. But boy, was it fascinating to load up BASIC and write your own programs!! I can’t say how many summer hours were spent staring at the screen and trying to get things to work.

Looking back, I can say that I probably wouldn’t have been in the technology field if not for that first taste of computing.

Thank you, Mr. Tramiel. RIP.

HOW TO : Configure Jboss to send log messages to syslog

Jboss uses the log4j framework to provide logging services. log4j is a very flexible framework and can do a lot of things; one of its features is the ability to send log messages to multiple destinations. Here is a quick how-to on configuring Jboss to send log messages over the syslog protocol to a syslog server. This is pretty useful when you are trying to consolidate logs from multiple sources into a central location.

First, some background about how log4j is configured in Jboss

The log4j configuration in Jboss is managed by the file jboss-log4j.xml located at $JBOSS_HOME/server/$JBOSS_PROFILE/conf.

There are three parts to this configuration file

  1. Appenders
    • An appender is a way to define a particular logging method. By default, Jboss provides a bunch of appenders in this config file, but only the FILE and CONSOLE appenders are enabled. The FILE appender writes the log messages to a log file and rotates it based on the criteria in the appender. The CONSOLE appender just sends messages to the console, which comes into the picture when you are not running Jboss as a service. In addition, there are appenders for syslog, SNMP, and email that are commented out.
  2. Categories
    • A category is where you define the class you want to log messages for and which appender it should use. If you don’t specify an appender or a threshold for the logging level, logging for the class is done at the default log level and by the appender specified in the default (root) category.
  3. Default (root) Category
    • As mentioned above, this is the catch-all for classes that are not explicitly specified in the categories section.

Getting back to the reason for this post, here is how you would enable the syslog appender and then configure a category to use it. For this example, we will use a class named org.kudithipudi

  1. Enable the syslog appender by un-commenting the following section in the jboss-log4j.xml file[code]<!-- Syslog events -->
    <appender name="SYSLOG" class="org.apache.log4j.net.SyslogAppender">
      <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
      <param name="Threshold" value="ERROR"/>
      <param name="Facility" value="LOCAL7"/>
      <param name="FacilityPrinting" value="true"/>
      <param name="SyslogHost" value="localhost"/>
      <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="[%d{ABSOLUTE},%c{1}] %m%n"/>
      </layout>
    </appender>
    [/code]
  2. Add a new category to use this appender [code]<category name="org.kudithipudi">
      <priority value="INFO"/>
      <appender-ref ref="SYSLOG"/>
    </category>[/code]
  3. Restart Jboss and you should see messages from Jboss being sent to the syslog server

A couple of notes..

  • Even though we specified a threshold of INFO in the category, only messages of ERROR level or higher will be sent to the syslog server, because we set a threshold of ERROR in the appender. This is actually pretty useful when you want to attach two appenders to a category and log at different levels. You can set another appender to the INFO level and add it to this category; that appender will then log everything of INFO and higher, while the syslog appender will only process ERROR messages.
  • The destination for the syslog messages is set by the SyslogHost parameter. In this example, I just used localhost. (See the test sketch below if messages don’t show up.)
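
If nothing arrives at the syslog server, it helps to rule out the network path before blaming log4j. Here is a hypothetical raw-syslog test with netcat, independent of Jboss (priority 187 = facility LOCAL7 × 8 + severity err):

[code]
# Send one UDP syslog packet to the server and check that it lands in the log
echo "<187>$(date '+%b %d %H:%M:%S') $(hostname) test: hello" | nc -u -w1 localhost 514
[/code]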

Project Uptime : Progress Report 5 : Getting ready for Reddit and Hacker News

A very timely post on Hacker News by Ewan Leith about configuring a low-end server to take ~11 million hits per month gave me some more ideas for optimizing the performance of this website. Ewan used a combination of nginx and varnish to get the server to respond to that much traffic.

From my earlier post, you might recall that I planned on checking out nginx as the web server, but then ended up using Apache. Based on the recommendations from Ewan’s article, I decided to add Varnish to the picture; the resulting stack is spelled out below.

And boy, did the performance improve or what. Here are some before and after performance charts, based on a test run from blitz.io. The test lasted 60 seconds and simulated 250 simultaneous connections.

BEFORE

  • Screenshot of Response times and hit rates. Note that the server essentially stopped responding 25 seconds into the test.
  • Screenshot of the analysis summary. 84% error rate!!

AFTER

  • Screenshot of response times and hit rates
  • Screenshot of the analysis summary. 99.98% success rate!!

What a difference!! The server had in fact stopped responding after the first (before) test and had to be hard rebooted. So how did I achieve this improvement? Mostly by copying the ideas from Ewan :). The final configuration for serving web pages looks like this on the server end

Varnish (listens on TCP 80) -> Apache (listens on TCP 8080)

NOTE : All the configuration guides (as with the previous entries of the posts in this series) are specific to Ubuntu.

  1. Configure Apache to listen on port 8080
    1. Stop Apache [code] sudo service apache2 stop [/code]
    2. Edit the following files to change the default port from 80 to 8080
      1. /etc/apache2/ports.conf
        1. Change [code]NameVirtualHost *:80
          Listen 80
          [/code]
        2. to [code]NameVirtualHost *:8080
          Listen 8080
          [/code]
      2. /etc/apache2/sites-available/default.conf (NOTE: This is the default sample site that comes with the package. You can create a new one for your site.  If you do so, you need to edit your site specific conf file)
        1. Change [code] <VirtualHost *:80> [/code]
        2. To [code]<VirtualHost *:8080> [/code]
    3. Restart apache and ensure that it is listening on port 8080 by using this trick (or see the verification sketch after this list).
  2. Install Varnish and configure it to listen on port 80
    1. Add the Varnish repository to the system and install the package[code]curl http://repo.varnish-cache.org/debian/GPG-key.txt | sudo apt-key add -
      echo "deb http://repo.varnish-cache.org/ubuntu/ lucid varnish-3.0" | sudo tee -a /etc/apt/sources.list
      sudo apt-get update
      sudo apt-get install varnish
      [/code]
    2. Configure Varnish to listen on port 80 and use 64MB of RAM for caching. (NOTE: by default, Varnish looks for its backend, in this case Apache, on port 8080, so there is no need to configure that specifically.)
      1. Edit the file /etc/default/varnish
        1. Change [code]DAEMON_OPTS="-a :6081 \
          -T localhost:6082 \
          -f /etc/varnish/default.vcl \
          -S /etc/varnish/secret \
          -s malloc,256m"
          [/code]
        2. To [code] DAEMON_OPTS="-a :80 \
          -T localhost:6082 \
          -f /etc/varnish/default.vcl \
          -S /etc/varnish/secret \
          -s malloc,64m"
          [/code]
    3. Restart Varnish [code]sudo service varnish restart[/code]

      and you are ready to rock and roll.
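
A quick way to verify the final plumbing (a sketch; the netstat flags assume a Linux box, and the exact headers vary by Varnish version):

[code]
# Varnish should own :80 and Apache :8080
sudo netstat -tlnp | grep -E ':80 |:8080 '

# Responses served through Varnish should carry Via/X-Varnish headers
curl -sI http://localhost/ | grep -iE 'via|x-varnish|age'
[/code]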

There are some issues with this setup in terms of logging. Unlike typical web server logs, where every request is logged, I noticed that not all requests show up in the Apache logs. I am guessing that is because varnish serves most of the content from its cache. I still have to figure out how to get that working. But that is for another post :).

HOW TO : Perform OCR on PDF files for free

I had to convert a scanned PDF file into an editable document recently. You can do this using OCR, and there is a ton of software out there that does it. There are even web-based services that do this. But each of them had limitations: either you had to buy the software, or there was a limit on the number of pages that could be scanned. I didn’t want to buy a license, since this is not something I would be doing regularly, and the document I had to convert was 61 pages, so none of the online services allowed me to do it. I remembered reading that Google Docs added this (OCR) capability a while ago, and since I have a Google Apps account, I decided to give it a try.

Google also has a limit of 2 pages per OCR conversion. So, after some brainstorming, I came up with this quick hack to use Google Docs for converting large PDF files into editable content.

  1. Split the PDF file into two-page documents using PDFsam (Open Source PDF Split and Merge Tool); see the sketch after this list for a command-line alternative.
  2. Log into your Google Docs interface at http://docs.google.com . All you need is a Google Account to use this feature
  3. Create a folder (collection) to organize your files. This is not required, but it will make searching for the files a lot easier
  4. In the upload settings, check the option to convert text from uploaded PDF files to editable documents
  5. Upload the PDF files you created in step 1.
  6. As you upload the files, Google creates an editable document with the text from each PDF file. You can then create a new document and copy/paste the content from all the smaller files into it.
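
If you would rather script step 1 than click through PDFsam, something like the following works (a sketch, assuming the pdftk utility is installed; input.pdf is a placeholder name):

[code]
# Split input.pdf into two-page chunks named chunk_1.pdf, chunk_3.pdf, ...
total=$(pdftk input.pdf dump_data | awk '/NumberOfPages/ {print $2}')
for ((start=1; start<=total; start+=2)); do
  end=$(( start + 1 < total ? start + 1 : total ))
  pdftk input.pdf cat ${start}-${end} output "chunk_${start}.pdf"
done
[/code]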

I think someone with more programming chops than me can improve this by using the Google API to do the copy/paste from the smaller docs into the final document :).