Technology

HOW TO : Capture HTTP Headers using tcpdump

A quick how-to on capturing HTTP headers using tcpdump on a web server running Linux.

    • On the web server, issue the following command

      [bash] tcpdump -s 1024 -C 1024000 -w /tmp/httpcapture dst port 80 [/bash]

      (Here -s 1024 captures the first 1024 bytes of each packet, which is plenty for the headers, and -C rotates the capture file once it grows past the given size, specified in millions of bytes.)

        • Stop the capture by pressing the break sequence (Ctrl + C)
        • Open the capture file (httpcapture in this example) in Wireshark and check out the headers under the HTTP protocol
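
        If you don't have Wireshark handy, you can also eyeball the headers straight from the command line. A minimal sketch, assuming the capture was written to /tmp/httpcapture as above:

        [bash] tcpdump -A -r /tmp/httpcapture | less [/bash]

        The -A flag prints each packet as ASCII, so the HTTP request lines and header fields show up as readable text.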

        HOW TO : Configure Cache Expiration in Apache

        Cache servers depend on the cache control headers provided by the web server. Essentially, the web server (based on its configuration) specifies what content is cacheable and for how long. (Note: some cache servers might ignore this and apply a default cache period for specific content, but that is a topic for another post 🙂 )

        Here is a quick and dirty way to configure an Apache 2.x server to set cache control headers on all content in a directory

        [bash]
        # Requires mod_expires to be enabled
        ExpiresActive On
        <Directory "/var/www/html/static">
            Options FollowSymLinks MultiViews
            Order allow,deny
            Allow from all
            # Expire content one hour after its last modification time
            ExpiresDefault "modification plus 1 hour"
        </Directory>
        [/bash]

        This configuration tells Apache to send cache control headers for all content in the /var/www/html/static folder. The cache expiration is set to one hour from the modification time of the content. Note that the ExpiresActive and ExpiresDefault directives are provided by mod_expires, so that module has to be enabled for this to work.
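
        Once Apache is reloaded, a quick way to confirm the headers are being sent is to request a file and look at the response. A small sketch; the host name and file name below are just placeholders for content under /static on your server:

        [bash] curl -I http://localhost/static/test.png [/bash]

        The response should include an Expires header (roughly one hour after the file's modification time) and a matching Cache-Control: max-age value, both added by mod_expires.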

        Analytics in the Cloud : Not there yet

        I attended a webinar hosted by Deepak Singh from Amazon’s Web Services group on analytics in the cloud. He made a very compelling case for using the cloud to build out your analytics infrastructure. Especially with the growing data sizes that we deal with now, I think it makes absolute sense. You can use different software stacks and grow (and shrink) your hardware stack as required. Great stuff.

        But there is a catch. Most of the data generated by organizations today lives “inside” their perimeters. Whether it is the OLAP database collecting all your data or the application that spews gigabytes of logs, most of the data is housed in your own infrastructure. So if you want to use the cloud to perform analytics on this data, you first have to transfer it to the cloud. And therein lies the problem. As Deepak mentioned in the webinar, human beings have yet to conquer the limitations of physics :). You need a pretty big pipe to the Internet just to transfer this data.

        Amazon has come up with various ways to help with this issue. They are creating copies of publicly available data sets within their cloud so that customers don’t have to transfer them. They are also working with companies to host private data sets in the cloud for other customers to use. So, similar to how you can spin up a Redhat AMI by paying a license fee to Redhat, I believe they are looking at giving customers access to these private data sets for a fee paid to the company providing the data set. It is a win-win-win situation 🙂 for Amazon, the company providing the private data set, and Amazon’s web services customers. They also support a one-time import of data from physical disk or tape.

        Coming back to the title of this post :), I think this field is still in its infancy. Once companies start migrating their infrastructure to the cloud (and yes, it will happen; it is only a matter of time :) ), it will be a lot easier to leverage the cloud for your analytics. All your data will already be in the cloud, and you can start leveraging the hardware and software stacks there.

        LinkedIn Network Map

        LinkedIn (the professional networking site) provides a way to map your network and see where your strongest connections are. Here is a map of my network. You can click on the image to get to the live map.

        My strongest connections so far are at

        I wish they came up with a map showing the location of my network too. That way, I can find out if I can get a job in New Zealand through my network :).

        HOW TO : Combining Perl and Zoho to produce reports

        This HOW TO is more for my notes. We had a request at work to parse some log files and create a graph from the data in them.

        The log files looked like this

        [bash]
        0m0.107s
        0m0.022s
        0m0.015s
        2011-01-05_02_22
        0m0.102s
        0m0.024s
        0m0.014s
        2011-01-05_02_23
        [/bash]

        I wrote the following Perl script to transform the log file so that it looks like this

        [bash]| 0m0.107s| 0m0.022s| 0m0.015s| 2011-01-05 | 02:22

        | 0m0.102s| 0m0.024s| 0m0.014s| 2011-01-05 | 02:23 [/bash]

        The Perl script:

        [perl]
        #!/usr/bin/perl
        # Modules to load
        use strict;
        use warnings;

        # Variables
        my $inputFile = 'input.txt';
        my $version   = 0.1;

        my $logFile = 'parsed_input.csv';

        # Clear the screen
        system($^O eq 'MSWin32' ? 'cls' : 'clear');

        # Open the output log file
        open(LOGFILE, "> $logFile") || die "Couldn't open $logFile, exiting $!\n";

        # Open the input file
        open(INPUTFILE, "< $inputFile") || die "Couldn't open $inputFile, exiting $!\n";

        # Process the input file, one line at a time
        while (defined (my $line = <INPUTFILE>)) {
            chomp $line;
            if ($line =~ /^$/) {
                # Blank line: start a new line in the output
                print LOGFILE "\n";
            }
            elsif ($line =~ /2011/) {
                # Date stamp: split the date and time
                my @date = split(/_/, $line);
                print LOGFILE "| $date[0] | $date[1]:$date[2]";
            }
            else {
                # Timing value: write it to the output
                print LOGFILE "| $line";
            }
        }

        close(INPUTFILE);
        close(LOGFILE);
        [/perl]
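
        To generate the parsed file, just run the script with Perl. A quick usage sketch; parse_input.pl is simply the name I am assuming the script is saved under, with input.txt sitting in the same directory:

        [bash] perl parse_input.pl && cat parsed_input.csv [/bash]
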
        I then took the parsed log files and imported them into the cloud-based reporting engine provided by Zoho at http://reports.zoho.com

        The final results are these reports

        SERVER1

        SERVER2

        Did I say I love technology? 🙂

        HOW TO : Find out which network port a program is using in Linux

        A quick way to figure out which ports a particular program is using in Linux

        [bash] netstat -plan | grep -i PROGRAM_NAME [/bash]

        Example : Check which ports SSH is listening on

        [bash]

        samurai@samurai:~$ sudo /bin/netstat -plan | grep sshd
        tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      5257/sshd
        tcp        0     52 123.123.123.123:22      124.124.124.124:32846     ESTABLISHED 3551/sshd: samurai
        tcp6       0      0 :::22                   :::*                    LISTEN      5257/sshd
        unix  3      [ ]         STREAM     CONNECTED     5893     3551/sshd: samurai
        unix  2      [ ]         DGRAM                    5849     3551/sshd: samurai

        [/bash]
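
        If netstat is not available on the box, lsof can pull up similar information. A rough equivalent, assuming lsof is installed (the -P and -n flags keep ports and addresses numeric):

        [bash] sudo lsof -i -P -n | grep -i sshd [/bash]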

        HOW TO : Manage startup services in Ubuntu

        Most Redhat/Fedora users are used to chkconfig and service for controlling the services/programs that start up at boot time. Here is how you do it in Ubuntu

        • Check status of a particular service

        [bash] sudo service SERVICE_NAME status [/bash]

        Example : Check the status of Apache Web Service

        [bash]samurai@samurai:~$ sudo service apache2 status
        Apache is running (pid 3496).[/bash]

        • Add a service to start on bootup

        [bash] sudo update-rc.d SERVICE_NAME defaults [/bash]

        Example : Configure squid to start on bootup

        [bash] sudo update-rc.d squid defaults [/bash]

        • Stop a service from starting on bootup

        [bash] sudo update-rc.d -f SERVICE_NAME remove [/bash]

        Example : Configure squid to NOT start on bootup

        [bash] sudo update-rc.d -f squid remove [/bash]

        NOTE : The service needs a startup script in /etc/init.d for update-rc.d to work. The -f flag on remove is needed because update-rc.d refuses to remove the links while that script still exists.
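
        To double check what update-rc.d did, you can list the symlinks it manages in the runlevel directories. A quick sketch, using squid as the example service:

        [bash] ls -l /etc/rc?.d/ | grep squid [/bash]

        S-prefixed links start the service in that runlevel and K-prefixed links stop it.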

        HOW TO : Check IO speed on a Linux Machine

        For my notes: if you ever want to check the IO capability of a disk (local or network) on a Linux machine, use the following command

        [bash] dd if=/dev/zero of=test.file bs=4M count=1000 [/bash]

        The above command copies data from /dev/zero to a file called test.file (place the file on the disk you want to measure), writing in 4 MB blocks for a total file size of about 4000 MB (roughly 4 GB). dd reports the throughput once it finishes.
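
        One caveat: by default dd writes through the page cache, so the reported speed can look better than what the disk can actually sustain. To get a more realistic number, you can ask dd to flush the data to disk before it reports the result (conv=fdatasync is a GNU dd option; test.file is the same placeholder as above):

        [bash] dd if=/dev/zero of=test.file bs=4M count=1000 conv=fdatasync [/bash]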

        Cloud Computing and your company's infrastructure

        Bold forecast :) but in 5 to 10 years, I predict the majority of a company’s infrastructure will be hosted in a “cloud”. If you recall (circa 2000), most companies were hosting “anti-spam” services in house. If anyone had suggested outsourcing that service, you would have gotten an “are-you-crazy” look :). And now, you get the same look if anyone suggests running the anti-spam service in house. I believe the same is going to happen with infrastructure. You might still run some components in house, but that footprint will get smaller and smaller. Companies will be forced to focus on their core competency rather than maintain an army of engineers to perform tasks that someone else might be a lot better at.

        Speaking of being visionary, apparently Netflix operates most of their infrastructure in the cloud. If Netflix can operate in the cloud, a majority of us can too :). Here are some links regarding their lessons from moving to the cloud.

        http://blip.tv/file/4252897 (Video of Netflix Director of Engineering explaining their move to the cloud)

        https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxwcmFjdGljYWxjbG91ZGNvbXB1dGluZ3xneDo2NDc2ODVjY2ExY2Y1Zjcz&pli=1 (Write-up by a Netflix engineer about the move to the cloud from a storage and DB perspective)