Possum Wasp

So last night I attempted to fight off an urban possum with a can of wasp spray. It was all I had, and the possum didn’t exactly run off; instead it walked away looking sort of angry. Now if my poor choice of weaponry caused some angry mutation in downtown Houma, you have my apologies.


Spying on a directory with auditd

Files started coming up missing on a server and I got freaked out looking for security holes, but sometimes it’s just users and other utilities spiking the punch bowl. You can get serious about watching files with other tools, but I went back to good ole auditd.

A simple test to track stuff getting trashed from an upload folder (-w watches the path, -p wa triggers on writes and attribute changes, and -k tags matching events with a searchable key):

[code]auditctl -w /site-dir/wp-content/uploads/ -p wa -k upload_issue[/code]

A capital W will remove the rule:

[code]auditctl -W /site-dir/wp-content/uploads/ -p wa -k upload_issue[/code]

Do a quick search for issues with ausearch, either by file path or by the key we tagged the rule with:

[code]ausearch -f wp-content/uploads
ausearch -k upload_issue[/code]

Now permanently add the rule on a Red Hat system by putting this line in /etc/audit/audit.rules. Just leave off the auditctl command.

[code] -w /site-dir/wp-content/uploads/ -p wa -k upload_issue[/code]
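For context, the relevant chunk of /etc/audit/audit.rules ends up looking something like this (the -D and -b lines are the stock defaults that ship on Red Hat; our watch rule goes at the bottom):

```
# First rule - delete all existing rules
-D

# Increase the kernel audit buffers (stock Red Hat default)
-b 320

# Watch the uploads directory for writes and attribute changes, tagged with our key
-w /site-dir/wp-content/uploads/ -p wa -k upload_issue
```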

Of course you need to make sure the auditd process is running and enabled at boot with chkconfig, etc. Good ole status check like:

[code]/etc/init.d/auditd status[/code]

Here are a few of the resources I used:

Please forgive the RedHat auth-walls…


My Favorite HTTrack commands

HTTrack is a website mirroring utility that can swamp your disks with mirror copies of the internet. I’ve had to use it several times to make off-line copies of websites for all sorts of weird reasons. You’ll find HTTrack at: www.httrack.com. You can get a full list of command line options at: https://www.httrack.com/html/fcguide.html. There is a spiffy web and Windows wizard interface for HTTrack, but I gave that up.

This is the recipe for the command line options I’ve been using to produce a browse-able offline version of accreditation documents. This command says: “Make an offline mirror of these URLs, go up to 8 links deep on these sites and 1 link deep on other domains, stay on the TLD (.edu), and do it as quickly as possible.” Be warned, as it currently stands this will fill up about 1.5GB of disk space ;P.

[code]httrack http://www.nicholls.edu/sacscoc-2016/ http://www.nicholls.edu/catalog/2014-2015/html/ http://www.nicholls.edu/about/ -O /Users/nichweb/web-test -r8 -%e1 -%c16 -c16 -B -l -%P -A200000[/code]

The great part is that the archive grows as URLs are added.

Apache log one-liners using tail, awk, sort, etc.

A good bunch of samples, with other examples, can be found at: https://blog.nexcess.net/2011/01/21/one-liners-for-apache-log-files/

# top 20 URLs from the last 5000 hits
tail -5000 ./transfer.log | awk '{print $7}' | sort | uniq -c | sort -rn | head -20
tail -5000 ./transfer.log | awk '{freq[$7]++} END {for (x in freq) {print freq[x], x}}' | sort -rn | head -20

# top 20 URLs excluding POST data from the last 5000 hits
tail -5000 ./transfer.log | awk -F"[ ?]" '{print $7}' | sort | uniq -c | sort -rn | head -20
tail -5000 ./transfer.log | awk -F"[ ?]" '{freq[$7]++} END {for (x in freq) {print freq[x], x}}' | sort -rn | head -20

# top 20 IPs from the last 5000 hits
tail -5000 ./transfer.log | awk '{print $1}' | sort | uniq -c | sort -rn | head -20
tail -5000 ./transfer.log | awk '{freq[$1]++} END {for (x in freq) {print freq[x], x}}' | sort -rn | head -20

# top 20 URLs requested from a certain IP from the last 5000 hits
IP=; tail -5000 ./transfer.log | grep "$IP" | awk '{print $7}' | sort | uniq -c | sort -rn | head -20
IP=; tail -5000 ./transfer.log | awk -v ip="$IP" '$1 ~ ip {freq[$7]++} END {for (x in freq) {print freq[x], x}}' | sort -rn | head -20

# top 20 URLs requested from a certain IP, excluding POST data, from the last 5000 hits
IP=; tail -5000 ./transfer.log | fgrep "$IP" | awk -F"[ ?]" '{print $7}' | sort | uniq -c | sort -rn | head -20
IP=; tail -5000 ./transfer.log | awk -F"[ ?]" -v ip="$IP" '$1 ~ ip {freq[$7]++} END {for (x in freq) {print freq[x], x}}' | sort -rn | head -20

# top 20 referrers from the last 5000 hits
tail -5000 ./transfer.log | awk '{print $11}' | tr -d '"' | sort | uniq -c | sort -rn | head -20
tail -5000 ./transfer.log | awk '{freq[$11]++} END {for (x in freq) {print freq[x], x}}' | tr -d '"' | sort -rn | head -20

# top 20 user agents from the last 5000 hits
tail -5000 ./transfer.log | cut -d' ' -f12- | sort | uniq -c | sort -rn | head -20

# sum of data (in MB) transferred in the last 5000 hits
tail -5000 ./transfer.log | awk '{sum+=$10} END {print sum/1048576}'
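The field numbers ($1 for the IP, $7 for the path, and so on) assume the combined log format, so here’s a quick sanity check against a couple of made-up log lines (the file path and entries below are hypothetical):

```shell
# Three hypothetical combined-format hits: two for one path, one for another.
printf '%s\n' \
  '1.2.3.4 - - [01/Jan/2016:00:00:01 -0600] "GET /index.html HTTP/1.1" 200 1024 "-" "curl/7.0"' \
  '1.2.3.4 - - [01/Jan/2016:00:00:02 -0600] "GET /index.html HTTP/1.1" 200 1024 "-" "curl/7.0"' \
  '5.6.7.8 - - [01/Jan/2016:00:00:03 -0600] "GET /about/ HTTP/1.1" 200 2048 "-" "curl/7.0"' \
  > /tmp/sample.log

# Field 7 is the request path; /index.html should come out on top with a count of 2.
awk '{print $7}' /tmp/sample.log | sort | uniq -c | sort -rn | head -20
```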

Using HyperDB to separate and share user and user_meta between WordPress installations

I need to remember to keep this example for some testing. This should be a good start for sharing the users and usermeta tables between websites. I do know that usermeta tends to hold very site-centric settings at times. The original article was located at: http://wordpress.aspcode.net/view/63538464303732726666099/how-to-use-hyperdb-to-separate-and-share-a-user-dataset-between-wordpress-installs

$wpdb->add_database(array( // Connect to users database
	'host'     => DB_HOST,     // I am using the same host for my two DBs
	'user'     => DB_USER,     // I am using the same username for my two DBs
	'password' => DB_PASSWORD, // I am using the same p/w for my two DBs
	'name'     => 'my_user_db_name',
	'write'    => 0, // Change to 1 if you want the slave site the power to update user data.
	'read'     => 1,
	'dataset'  => 'user',
	'timeout'  => 0.2,
));

$wpdb->add_database(array( // Main database
	'host'     => DB_HOST,
	'user'     => DB_USER,
	'password' => DB_PASSWORD,
	'name'     => DB_NAME,
));

// Route queries against the users and usermeta tables to the 'user' dataset.
function user_callback($query, $wpdb) {
	if ( $wpdb->base_prefix . 'users' == $wpdb->table || $wpdb->base_prefix . 'usermeta' == $wpdb->table ) {
		return 'user';
	}
}
$wpdb->add_callback('user_callback');

Create a new Git repo from an old repo

How to continue an old repository as a full copy in a new repository. This preserves the history of the old repository. Future changes will be committed to the new repository and will not affect the old one.

This originally came from the info found at: http://stackoverflow.com/questions/10963878/how-do-you-fork-your-own-project-on-github

// This makes the new repo as a checkout of the old repo to a new directory.
# git clone https://github.com/nicholls-state-university/nicholls-2012-core.git nicholls-2015-core
// Change directory to new repo area
# cd nicholls-2015-core
// Change the origin to the new repo. Remember to create the new (empty) repo on GitHub first.
# git remote set-url origin https://github.com/nicholls-state-university/nicholls-2015-core.git
// Push commits to new area.
# git push origin master
// Push all branches to the new repo, just making sure.
# git push --all
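The same steps can be dry-run entirely locally before touching GitHub. Here’s a sketch using throwaway paths under /tmp in place of the GitHub URLs (all names below are made up):

```shell
# Stand-in for the old repo, with one commit of history.
git init -q /tmp/old-core
cd /tmp/old-core
git config user.email "you@example.com"   # throwaway identity for the demo
git config user.name  "Demo User"
echo "v2012" > core.txt
git add core.txt
git commit -q -m "old history"

# Stand-in for the newly created (empty) repo.
git init -q --bare /tmp/new-core.git

# Clone the old repo, repoint origin at the new one, and push the history over.
git clone -q /tmp/old-core /tmp/new-core-checkout
cd /tmp/new-core-checkout
git remote set-url origin /tmp/new-core.git
git push -q origin HEAD
```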

Git local repositories

These are some quick examples and notes related to using git with local repositories. Using local repositories can be helpful for maintaining file changes without committing to larger repository systems like GitHub. Instead of syncing with a remote repository, changes are committed to and recorded in the local repository.

First we create a new local folder and initialize it as a bare Git repository.

# mkdir my-local-git
# cd my-local-git
# git init --bare

Then we just clone that to the location we want and work on it like any other git repository.

# git clone /where/is/my-local-git
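To round out the loop, here’s a quick sketch of working against such a local bare repo end to end: create it, clone it, commit, and push back (the paths and file names are made up):

```shell
# Create the local bare repo, as above.
mkdir -p /tmp/my-local-git
git init -q --bare /tmp/my-local-git

# Clone it to a working area and make a commit.
git clone -q /tmp/my-local-git /tmp/my-local-work
cd /tmp/my-local-work
git config user.email "you@example.com"   # throwaway identity for the demo
git config user.name  "Demo User"
echo "notes" > readme.txt
git add readme.txt
git commit -q -m "first commit"

# Push the change back into the local bare repo.
git push -q origin HEAD
```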