Quick note to self: if you are capturing traffic using tcpdump, you can rotate the capture files based on size
sudo tcpdump -i INTERFACE_TO_CAPTURE_TRAFFIC_ON -C 10 -s0 -W NO_OF_FILES_TO_ROTATE_THROUGH -w /PATH_TO_CAPTURE_FILE
Explanation of the options used
-i : specify the interface you want to capture the traffic on. If not specified, tcpdump will listen on the lowest-numbered configured interface, e.g. eth0
-C : specify the maximum file size in units of 1,000,000 bytes. In this example, each file created would be 10,000,000 bytes, or ~9.5 MB
-s : specify the packet length to capture. 0 (zero) tells tcpdump to capture the entire packet
-W : specify the number of files to rotate through once the file size specified with -C is reached. The files keep rotating throughout the capture
-w : specify the path to the capture file. tcpdump appends an integer to the end of the file name based on the number of files it has to rotate through.
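For instance, a concrete invocation might look like this (the interface eth0, the five-file rotation, and the /tmp path are assumptions for illustration):

```shell
# Capture on eth0, rotating through 5 files of ~10 MB each;
# tcpdump names them /tmp/capture.pcap0 through /tmp/capture.pcap4.
# Requires root.
sudo tcpdump -i eth0 -C 10 -s0 -W 5 -w /tmp/capture.pcap
```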
If you are using the mod_proxy feature in Apache to forward requests for certain content to a backend server, but want to restrict access to that content to clients originating from certain IP addresses, you can use the location feature in Apache.
The Location directive limits the scope of the enclosed directives by URL. This is very similar to the Directory directive, but the difference is that you can put controls based on the URL rather than the location of the content.
In this example, I am forwarding content destined for http://kudithipudi.org/testLocation to an internal server at http://127.0.0.1:8080/testLocation. I am going to use the Location directive to restrict access to just requests originating from IP address 10.10.10.10
<Location /testLocation>
    # Apache 2.2 syntax; the Order line is needed for Deny/Allow to apply
    Order deny,allow
    Deny from all
    Allow from 10.10.10.10
</Location>
ProxyPass /testLocation http://127.0.0.1:8080/testLocation
ProxyPassReverse /testLocation http://127.0.0.1:8080/testLocation
Simple one-liner to check whether your web server accepts weak ciphers
openssl s_client -cipher LOW -connect SERVER_NAME:443
If the handshake succeeds, the server accepted a LOW-strength cipher; a handshake failure means the weak ciphers are disabled.
I was listening to this week’s edition of Steve Gibson’s Security Now podcast, and Steve talked about a unique way of using DNS. His SpinRite application uses DNS to check for the latest version of the application. Most applications use HTTP to check version information, which can pose a problem in environments with proxy servers; DNS traffic, on the other hand, is generally allowed in most environments. He says his application does a DNS lookup for something like application.version.grc.com, and the “IP” address that is returned denotes the major and minor versions of the code. Depending on the response, the application will prompt with a “need to update” message.
Here’s a more technical post on the same subject by Jan-Piet Mens, from way back in 2006
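The scheme can be sketched as follows. The record name and the encoding (last two octets of the returned address as major.minor) are my assumptions for illustration, not GRC's actual layout:

```shell
# In a real check, returned_ip would come from a lookup such as:
#   returned_ip=$(dig +short application.version.example.com)
# It is hard-coded here to keep the sketch self-contained.
returned_ip="0.0.6.1"
major=$(echo "$returned_ip" | cut -d. -f3)
minor=$(echo "$returned_ip" | cut -d. -f4)
echo "latest version: $major.$minor"   # prints "latest version: 6.1"
```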
Quick one-liner for capturing traffic to and from a host (IP address) using tcpdump, writing it to a file for later analysis
sudo tcpdump -s0 host x.x.x.x -w destination.pcap
Things have been a bit hectic at work, so I didn’t get a lot of time to work on this project. Now that the new server has been set up and the kernel updated, we get down to the mundane task of installing the software.
One of the first things I do when configuring any new server is to restrict the root user from logging into the server remotely. SSH is the default remote shell access method nowadays. Please don’t tell me you are still using telnet.
Before restricting remote access for root, add a new user that you want to use for regular activities, add the user to the sudo group, and ensure you can log in and sudo to root as this user. Here are the steps I follow to do this on an Ubuntu server
Add a new user
sudo adduser xxxx
Add user to sudo group
sudo usermod -a -G sudo xxxx
Check the user can sudo to gain root access
su - xxxx
sudo su -
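Put together, the steps above look like this (xxxx is the placeholder username from the steps; these commands require root):

```shell
sudo adduser xxxx              # create the new user
sudo usermod -a -G sudo xxxx   # add the user to the sudo group
su - xxxx                      # log in as the new user
sudo su -                      # confirm the user can gain root
```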
Now, moving on to the software installation part
sudo apt-get install mysql-server
You will be prompted to set the MySQL root password during this install. This is quite convenient, unlike the older installs, where you had to set the root password later on.
sudo apt-get install php5-mysql
In addition to installing php5-mysql, this will also install Apache. I know, I mentioned I would like to try out the new version of Apache, but it looks like Ubuntu doesn’t have a package for it yet. And I am too lazy to compile from source.
With this, you have all the basic software for WordPress. Next, we will tweak this software to use fewer system resources.
Back in 2009 (last decade!!), I wrote a blog post on how you can trick Windows into routing traffic destined to a particular IP address to a black-hole. In it, I mentioned that the command to route traffic to /dev/null in Linux was
route ADD IP_ADDRESS_OF_MAIL_SERVER MASK 255.255.255.255 127.0.0.1
I ran into a need to try it today, and it looks like the trick doesn’t work. So here is the right command if you want to route traffic destined to a particular IP address to the loopback (or black-hole)
sudo route add -host IP_ADDRESS_OF_HOST lo
For example, if I want to black-hole traffic destined to 126.96.36.199, I would do the following
sudo route add -host 126.96.36.199 lo
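On systems with iproute2, a true blackhole route is an alternative worth knowing: it drops the traffic outright instead of bouncing it to the loopback (same illustrative IP address; requires root):

```shell
# Install a blackhole route for the destination
sudo ip route add blackhole 126.96.36.199/32
# Verify it took effect, then remove it when done
ip route show | grep blackhole
sudo ip route del blackhole 126.96.36.199/32
```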
If you want to check a server’s SSL certificate validation (expiry time, hostname match, self-signed, etc.) using curl, you can do it by running
curl -v https://SERVER_NAME
curl validates the certificate by default, so the command fails with an error if any of those checks fail; -v prints the certificate details.
Example: if you want to check the SSL certificate of GoDaddy
curl -v https://www.godaddy.com
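curl only tells you pass/fail; to see the actual validity dates, you can read them with openssl. Here is a self-contained sketch against a throwaway self-signed certificate, generated locally so the example doesn't depend on any server:

```shell
# Generate a disposable self-signed cert valid for 30 days...
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.test" \
    -keyout /tmp/key.pem -out /tmp/cert.pem -days 30 2>/dev/null
# ...and print its validity window (notBefore / notAfter).
# For a live server, pipe the cert in from:
#   echo | openssl s_client -connect www.godaddy.com:443
openssl x509 -in /tmp/cert.pem -noout -dates
```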
The uptime of this blog has been really bad recently. I switched to hosting it on a Rackspace virtual server last year and went with the cheapest option: a 256MB Linux virtual server that was costing me ~$12/month. I never got around to tuning the OS, so the server was always using swap and would go down pretty much every day. Last week, I upgraded the plan and moved to a 512MB server, but the uptime hasn’t been any better. Here’s a report from Pingdom (which, by the way, is a great service for tracking the uptime and responsiveness of your website) showing the availability of the site over the last year: 96%! For someone who has been working in the operations and infrastructure world, that is unacceptable. So my new goal is to maintain at least 99.5% uptime. Here is my plan to achieve this
- Move to a fresh VM with the latest kernel
- Upgrade to the latest version of Apache. Initially I wanted to move to nginx or lighttpd, but with the recent Apache release, I hear good things about it working well in low-memory situations.
- Upgrade to latest version of MySQL and tune it for memory usage
- Configure CloudFlare to serve a static version of the front page in case the server goes down. Design the static page to point people to my other digital presences (Google+, LinkedIn, Flickr, etc.)
I plan to blog the progress and learnings as I implement this plan.
Quick how-to for my personal records. iptables is an open-source firewall (and it does a lot more) included with most Linux distributions.
Steps to add new rule to existing configuration
- Check the list of rules and their corresponding sequence
sudo iptables -vL --line-numbers
- Add the new rule at the required location/sequence
sudo iptables -I INPUT LINE_NUMBER RULE
sudo iptables -I INPUT 8 -s X.X.X.X/24 -p tcp -m state --state NEW -m tcp --dport 3128 -j ACCEPT
- Save the rules so they persist across reboots
sudo service iptables save
(This works on RHEL/CentOS; on Debian/Ubuntu, use iptables-save to write the rules to a file.)
Thanks to Sijis for helping with the commands.