HOWTO

HOW TO : Query varnishlogs for requests with 404 responses

varnishlog, one of the tools provided with Varnish Cache, uses VSL query expressions (https://www.varnish-cache.org/docs/trunk/reference/vsl-query.html) to provide some powerful insights into requests and responses.

Here is how you can use varnishlog to show all client requests that are ending up with a 404 response.

sudo varnishlog -g request -i ReqURL -q "BerespStatus != 200"

Technically, this particular query shows all client requests with a response other than 200.

Breaking down the commands

-g request : shows all entries related to the request

-i ReqURL : tells varnishlog to display only the ReqURL (request URL) entries

-q "BerespStatus != 200" : query filter to match only non-200 responses. Note that the query has to be enclosed in double quotes.
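
The command above matches on the backend response status. If you want to match only requests that actually returned a 404 to the client, you can filter on the RespStatus tag instead. A minimal variation (assuming a Varnish version with the newer VSL tag names, i.e. 4.0 or later):

sudo varnishlog -g request -i ReqURL -q "RespStatus == 404"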

HOW TO : Enable wildcard domains in Squid

We were trying to modify some ACLs (access control lists) in Squid to allow traffic to certain websites. Instead of adding each individual hostname in a domain, we wanted to allow all traffic to a given domain.

Documentation on the interwebs is either old or unclear on how to achieve this.

After some trial and error, here is what works.

Say you want to allow all traffic to the google.com domain. You create an access list using dstdomain like below

acl name_of_acl dstdomain .google.com

The "." before the domain name acts as a wildcard, matching the domain itself and all of its subdomains.

Then you use the ACL to allow HTTP access like below

http_access allow name_of_acl
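
Putting the two lines together, here is a minimal sketch of the relevant squid.conf section. The ACL name allowed_sites is just an example, and the final deny matters because http_access rules are evaluated top to bottom with the first match winning:

acl allowed_sites dstdomain .google.com
http_access allow allowed_sites
http_access deny all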

HOW TO : pipe results between commands when using sudo

Let's say you are running a command with sudo and need to pass the output to a different command using a pipe. You would run

sudo command1 | command2

If command2 (or a redirect at the end of the pipeline) needs root privileges, this usually results in an error like the following

-bash: /command2: Permission denied

The trick to fix this is to run a shell with sudo using sh -c and enclose the whole pipeline in single quotes like below

sudo sh -c 'command1 | command2'

Essentially, you are opening a shell as root and running the whole pipeline inside it.
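
As a concrete (hypothetical) example, say you want to save the error lines from dmesg into a file under /root. Without the wrapper, the redirect is performed by your unprivileged shell and fails with the permission error above; wrapping the whole pipeline in sh -c makes it run as root:

sudo sh -c 'dmesg | grep -i error > /root/dmesg-errors.txt'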

HOW TO : Capture network traffic on a Solaris server

If you don’t have tcpdump installed on your Solaris server, you can use the “snoop” system command to capture network traffic.

Here is the command line option to capture 1000 packets of network traffic from IP 192.168.10.10 on a Solaris server using interface e1000g1 and write the output to /tmp/capture.pcap

snoop -d e1000g1 -c 1000 -o /tmp/capture.pcap host 192.168.10.10

Details of the command options

  • -d : Name of the interface you want to capture traffic on
  • -c : Number of packets you want to capture
  • -o : Path to the output file
  • host : IP address of the host you want to capture traffic from and to

More details at https://docs.oracle.com/cd/E23824_01/html/821-1453/gexkw.html

PS : You have to have root privileges to run this command.
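
To read the capture back later, you can point snoop at the file it wrote (the file should also open in Wireshark):

snoop -i /tmp/capture.pcap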

HOW TO : Use awk to print values larger than certain number

Quick how to on using awk to filter results if a certain value (column) is larger than a set value.

For example, if you have a file (servers.txt) with lines in this format

a_datacenter, servers 20
error, servers xyz
b_datacenter, servers 21
c_datacenter, servers 50

and you want to show only the lines that have a server count larger than 20, you can do this in awk by running

grep datacenter servers.txt | awk '$3 > 20 {print}' | more

breaking down the commands

grep – narrows the output down to just the lines containing datacenter

awk – $3 > 20 : Get the third field (awk separates text using spaces by default) and check if it is greater than 20

print – print the entire line
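
If you prefer to skip grep entirely, awk can do both the matching and the numeric comparison on its own. A sketch that should produce the same output:

awk '/datacenter/ && $3 > 20' servers.txt | more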

HOW TO : Search for a record in MongoDB based on length

Quick entry for my own records.

MongoDB is one of the popular open source document databases that are part of the NoSQL movement. One of the applications we deployed at work uses MongoDB as an internal storage engine. We ran into an issue where data was being replicated from MongoDB to MySQL and the replication stopped because of a size mismatch for an object between MongoDB and MySQL. Essentially, a record longer than the defined column length was being inserted into MySQL.

Here is the query we used to find the culprit objects. We used the awesome Robomongo client to connect to the MongoDB instance.

[code]db.some_table_to_search.find({$where:"this.some_column_to_search.length > 40"})[/code]

Breaking down the command

db -> Specifies the database you are trying to search

some_table_to_search -> Specifies the table (collection) you are trying to search

some_column_to_search -> Specifies the particular column (field) you are trying to search.

In this specific example, we were looking for entries longer than 40 characters for this column.
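
On newer MongoDB releases (3.6 or later), the same search can be written without the JavaScript based $where clause by combining $expr with the $strLenCP operator. A sketch using the same hypothetical table and column names:

[code]db.some_table_to_search.find({$expr: {$gt: [{$strLenCP: "$some_column_to_search"}, 40]}})[/code]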

If you come from the traditional RDBMS world, here is a link from MongoDB comparing terminology between RDBMS and MongoDB.

http://docs.mongodb.org/manual/reference/sql-comparison/

HOW TO : Convert PFX/P12 crypto objects into a java keystore

We recently needed to add a certificate that is currently in PKCS#12 format into a Java keystore at work. The typical approach would be to create an empty keystore and then import the certificate from the PKCS#12 store using the following command

[code]keytool -importkeystore -srckeystore sourceFile.p12 -srcstoretype PKCS12 -destkeystore destinationFile.jks[/code]

Note: PKCS#12 files can have extensions “.p12” or “.pfx”

The command executed without any issues, but we received the following error when we started the application server using this newly created keystore

[code]java.io.IOException: Error initializing server socket factory SSL context: Cannot recover key [/code]

It didn’t make sense, because we were able to view the certificate in the keystore and were using the right password in the configuration files.

After a lot of searching and head scratching, the team came up with the following solution

  1. Export the private key and the certificate (public key) from the PKCS#12 store using openssl.
  2. Recombine them into a fresh PKCS#12 file.
  3. Import that file into the java keystore (default format of JKS)

The commands used were

[code]
# extract the private key from the original PKCS#12 store
openssl pkcs12 -in sourcePKCS12File.p12 -nocerts -out privateKey.pem
# extract the certificate (public key) from the original PKCS#12 store
openssl pkcs12 -in sourcePKCS12File.p12 -nokeys -out publicCert.pem
# recombine the key and certificate into a fresh PKCS#12 file
openssl pkcs12 -export -out intermittentPKCS12File.p12 -inkey privateKey.pem -in publicCert.pem
# import the fresh PKCS#12 file into the final java keystore
keytool -importkeystore -srckeystore intermittentPKCS12File.p12 -srcstoretype PKCS12 -destkeystore finalKeyStore.jks
[/code]
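
Once the import completes, it is worth listing the contents of the new keystore to confirm that both the private key entry and the certificate made it over:

[code]keytool -list -v -keystore finalKeyStore.jks[/code]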

HOW TO : Use grep and awk to find count of unique entries

I have used grep extensively to analyze data in log files before. A good example is this post about using grep and sort to find the unique hits to a website. Here is another way to do it using grep and awk.

Say the log file you are analyzing is in the format below and you need to count how many times each unique BundleId appears

[code]2013-02-25 12:00:06,684 ERROR [com.blahblah.sme.command.request.CustomCommand] Unable to execute AssignServiceCommand, request = ‘<AssignServiceToRequest><MemberId>123456</MemberId><OrderBundle><BundleId>5080</BundleId></OrderBundle></AssignServiceToRequest>'[/code]

you can use grep and awk to find the number of times a unique bundleID appears by running

[code]grep -i bundleID LOG_FILE_NAME | awk '{ split($11,a,">"); print a[6]}' | sort | uniq -c | sort -rn[/code]

breaking down the commands

grep -i : tells grep to only show the lines from the file (LOG_FILE_NAME) containing the text bundleID and makes the search case insensitive

awk '{ split($11,a,">"); print a[6]}' : tells awk to grab the input from grep and take the 11th field (by default awk separates content with a space) and split that string into an array (a) using > as the delimiter. Finally, it prints out the sixth element of the array a

sort : sorts the output from awk into ascending order, so that identical entries end up next to each other (uniq only collapses adjacent duplicates)

uniq -c : takes the output from sort and counts how many times each unique item appears

sort -rn : takes the output from uniq and sorts it in reverse numerical order, so the most frequent BundleIds appear first

The output looked like this

[code]
173 5080</BundleId
12 5090</BundleId
8 2833</BundleId
1 2412</BundleId
1 2038</BundleId
1 1978</BundleId
1 1924</BundleId
[/code]
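
As an alternative sketch, grep -o can pull out just the matching part of each line and feed it to the same sort and uniq pipeline, skipping awk entirely (this assumes the BundleId values are numeric, as in the sample log line):

[code]grep -io '<bundleid>[0-9]*' LOG_FILE_NAME | sort | uniq -c | sort -rn[/code]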