Grid Computing: Installing Boinc Server on FreeBSD

I was asked by an academic client to install BOINC server on FreeBSD. BOINC is a software platform for volunteer computing and desktop grid computing. FreeBSD has a boinc-client port, but no port for the server, so it must be compiled manually. Following the instructions on the BOINC wiki for a FreeBSD installation results in some errors.

Here is how to get started in FreeBSD:

Follow the normal prerequisite / prep instructions found here: all of the prerequisites can be found in FreeBSD ports; install them, including Python and /usr/ports/databases/py-MySQLdb . I installed this in a jail, stripped Apache down to the bare minimum, and set things up for my client so that Apache would automatically include his httpd.project.conf files after running make_project.

The following are some of the things I did differently compared to the Linux install:

svn co boinc
cd boinc

Without editing the configure script you’ll get the following error while making:

/usr/bin/ld: cannot find -ldl

To fix this, edit the ./configure script:

vi configure

and remove all occurrences of “-ldl “:

:%s/-ldl //g

-ldl is not required on FreeBSD; the dl* functions are provided by libc.
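If you'd rather not do the edit interactively, the same substitution works with sed (a sketch; run it from the top of the boinc checkout and keep a backup):

```shell
# Same substitution as the vi command above, done non-interactively.
cp configure configure.orig
sed 's/-ldl //g' configure.orig > configure
chmod +x configure
```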


./configure --disable-client --disable-manager --with-boinc-platform=x86_64-pc-freebsd

You may need to replace x86_64-pc-freebsd depending on your hardware. Check the list here:

If you run `make` instead of GNU make (`gmake`), you'll get errors like these:

"Makefile", line 19: Missing dependency operator
"Makefile", line 23: Need an operator
"Makefile", line 27: Need an operator
make: fatal errors encountered -- cannot continue



That’s it! Follow the rest of the instructions on the Boinc wiki.

Lock SFTP Users to Their Home Directory

The solution to the age-old problem of locking SFTP users into their home directories is setting up a chroot environment. This normally requires copying the necessary binaries and libraries into the jail so that your chrooted users can run the allowed file-transfer tools.

As of OpenSSH 4.9p1, things have gotten a bit easier. OpenSSH has two features that make the task of locking users into their home directories a piece of cake. They are:

  1. A built-in SFTP subsystem.
    With the internal SFTP subsystem, you no longer need to copy binaries and their required libraries into the chroot; OpenSSH provides the SFTP service in-process.
  2. The Match keyword.
    This allows you to target specific users or groups in the sshd_config file and specify settings particular to them, like a chroot option and ForceCommand internal-sftp.

Getting it working is simple.

  1. Add a group called sftponly and add the users who you’d like to lock into their home directories to that group.
  2. Edit your sshd_config file (/etc/ssh/sshd_config if you’re on FreeBSD) and add the following to the bottom:
    Match Group sftponly
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand internal-sftp
            ChrootDirectory %h
  3. That’s it. HUP sshd (/etc/rc.d/sshd restart if you’re running FreeBSD) and test it out. One gotcha: sshd insists that the ChrootDirectory (and every directory above it) be owned by root and not writable by any group or other user, or the login will fail.

Avoid spam/junk folders when sending mail from web apps

If you’re running a local mail server to send mail from your web application, you’ve probably already spent hours upon hours wondering why mail always ends up in the recipient’s spam/junk folder. No matter what combination of custom headers you pass to PHP’s mail() function, Yahoo, Hotmail and maybe even Gmail still mark it as spam. Have no fear. The solution is quite simple.

There are two options that are very similar. They are DKIM and DomainKeys. When I was first experimenting, I had installed DomainKeys only to realize it worked with gmail and yahoo, but not hotmail. After some reading I found that DKIMProxy implemented both DKIM and DomainKeys, so I ditched DomainKeys and went with DKIMProxy. This covers all the major email services and ISPs. Switching was easy, because they work exactly the same with only a few minor differences in syntax.

Read more about how those protocols work here:

Essentially, DKIM is a way for email providers to verify that mail being sent from a particular domain is authorized. The domain owner inserts a TXT record into DNS for that domain; this record contains a public key. Each e-mail that gets sent is signed with the corresponding private key, and the signature is placed into the header of the email. Email providers can then verify the authenticity of each email using classic PGP-style signature techniques. DKIMProxy is a service that runs on your server, works with your existing mail server (Sendmail, Postfix, qmail, etc.) and can automatically sign your outgoing mail.
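For illustration only, a signed message ends up with a header along these lines (the domain, hashes and signature are made-up placeholders; the selector matches the one configured later in this post):

```
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=example.com; s=selector1;
        h=from:to:subject:date; bh=...; b=...
```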

Below I highlight the steps for setting up Postfix + DKIMProxy on FreeBSD to send mail from the local machine ONLY. This will be set up on an application server whose sole purpose is sending mail through the local relay, not receiving or relaying email from other machines. If you’ve already set up Postfix for wider purposes, you’re probably already pretty comfortable with it; just use the same techniques to specify the content_filter for any other listening service and skip my Postfix write-up. There will be some differences if you’re running this instance of FreeBSD in a jail (as I do); I will document those differences and the security precautions you’ll need to take as well.

If you’re not using FreeBSD, you’ll have to substitute my use of ports with your distribution’s package management command. The location of your config files may also vary, as well as the start-up scripts; otherwise, it’s all the same.

Installing / Configuring Postfix:

cd /usr/ports/mail/postfix
make && make install

Choose yes to modify mailer.conf

Stop sendmail if it’s running:

cd /etc/mail
make stop

Edit your rc.conf file to turn off sendmail and enable postfix. Add the following:
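The lines in question are the standard FreeBSD knobs (a sketch; the first four silence every sendmail daemon the stock rc.d scripts would otherwise start):

```shell
# /etc/rc.conf -- disable sendmail entirely, enable Postfix
sendmail_enable="NO"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"
postfix_enable="YES"
```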


If we only want to send local mail we can comment out the following line in /usr/local/etc/postfix/ :

#smtp      inet  n       -       n       -       -       smtpd

This is beneficial if we are running Postfix in a jail that doesn’t have access to a loopback device. If you were running in a jail but wanted to run an SMTP server as well, you could specify an IP address to listen on by prepending the line with that address, as follows:      inet  n       -       n       -       -       smtpd

It’s fine to comment out though.

Edit your /usr/local/etc/ file according to your needs. Since I am just using this to send mail from my domain, and am using Google Apps to receive mail, I do the following in :

myhostname =
inet_interfaces = loopback-only
mydestination = localhost

The default file has those specific parameters documented for your reference (RTFM).

Now start Postfix:

/usr/local/etc/rc.d/postfix start

Check /var/log/maillog for errors. If there is an error about reading /etc/aliases.db, you probably just need to generate it. Run the `newaliases` command and it should solve your problems.

Test sending mail from the local machine to make sure Postfix works. You can either use a PHP script or run sendmail manually. If the message doesn’t show up in your inbox, check the spam folder and /var/log/maillog .

Installing DKIMProxy:
You need to install the following Perl modules. CPAN doesn’t work well from a jail console, and if you’re like me and don’t want to install SSH or tmux to fix your tty issues in jails, the following commands will install the needed Perl modules non-interactively from the command line and work no matter what distribution you use:

perl -MCPAN -e'CPAN::Shell->install("Crypt::OpenSSL::RSA")'
perl -MCPAN -e'CPAN::Shell->install("Digest::SHA")'
perl -MCPAN -e'CPAN::Shell->install("Digest::SHA1")'
perl -MCPAN -e'CPAN::Shell->install("Mail::Address")'
perl -MCPAN -e'CPAN::Shell->install("MIME::Base64")'
perl -MCPAN -e'CPAN::Shell->install("Net::DNS")'
perl -MCPAN -e'CPAN::Shell->install("Net::Server")'
perl -MCPAN -e'CPAN::Shell->install("Test::More")'
perl -MCPAN -e'CPAN::Shell->install("Error")'
perl -MCPAN -e'CPAN::Shell->install("Text::Wrap")'
perl -MCPAN -e'CPAN::Shell->install("Mail::DomainKeys")'
perl -MCPAN -e'CPAN::Shell->install("Mail::DKIM")'
cd /usr/ports/mail/dkimproxy
make && make install

Add the following to your rc.conf :


We won’t configure “in” because we are only interested in sending mail out right now.

Generate some keys:

cd /usr/local/etc/
openssl genrsa -out privatedkim.key 1024
openssl rsa -in privatedkim.key -pubout -out publicdkim.key

My /usr/local/etc/dkimproxy_out.conf file looks like this:

# specify what address/port DKIMproxy should listen on
# specify what address/port DKIMproxy forwards mail to
# specify what domains DKIMproxy can sign for (comma-separated, no spaces)
# specify what signatures to add
signature dkim(c=relaxed)
signature domainkeys(c=nofws)
# specify location of the private key
keyfile   /usr/local/etc/privatedkim.key
# specify the selector (i.e. the name of the key record put in DNS)
selector  selector1

You can just copy /usr/local/etc/dkimproxy_out.conf.example to /usr/local/etc/dkimproxy_out.conf and make the appropriate changes. The selector parameter is what we’ll specify later in DNS.

Quick Explanation
dkimproxy is going to listen on the IP address / port (10027) we’ve specified for outgoing mail and proxy the SMTP traffic to the IP address / port (10028) we’ve specified, with the appropriate signature headers attached.

Security concern: if you’re using a jail and/or specified a public-facing IP address to listen on, you’ve just opened up relaying for the world, even if relaying is turned off in Postfix. That’s because you’ve opened a proxy that listens publicly and forwards internally to Postfix. SOOO… if you’ve specified a public-facing IP address, either change it to an internal-only address or firewall it off.

Back to Postfix Configuration:

Now we need to configure Postfix to use the DKIM proxy as a content_filter, sending mail out to it on port 10027 and receiving it back on port 10028.

We’ll be editing /usr/local/etc/postfix/ and, since we are only concerned with sending local mail, you can edit the pickup line to read as follows:

pickup    fifo  n       -       n       60      1       pickup
        -o      content_filter=dksign:[]:10027

and add the following somewhere below:

dksign  unix    -       -       n       -       10      smtp
    -o smtp_send_xforward_command=yes
    -o smtp_discard_ehlo_keywords=8bitmime
 inet  n  -      n       -       10      smtpd
    -o content_filter=
    -o receive_override_options=no_unknown_recipient_checks,no_header_body_checks
    -o smtpd_helo_restrictions=
    -o smtpd_client_restrictions=
    -o smtpd_sender_restrictions=
    -o smtpd_recipient_restrictions=permit_mynetworks,reject
    -o mynetworks=
    -o smtpd_authorized_xforward_hosts=,,localhost

Now restart postfix and start dkimproxy:

/usr/local/etc/rc.d/postfix reload
/usr/local/etc/rc.d/dkimproxy_out start

Check `netstat -na` to make sure you’ve got services listening on both 10027 and 10028. If not, check /var/log/maillog for errors.

Your DNS Records:

You’ll need to add three DNS records:

@       3600    IN      TXT     "v=spf1 a mx ~all"
_domainkey      3600    IN      TXT     "t=y; o=~;"
selector1._domainkey    3600    IN      TXT     "g=*; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDhrfYr2KLFiU7Zo6H06LlhFBEpif/Tb7oBJvKdEIm1uED9FqJump/q6RSt3Yw1iuM3iBaQcPohGbdoGiuaJGOUWMOblsSXkAOWxl4lbI5UQ6zCTBpVdLVDVWJ0E3UW1YJs1crSBdmG9G3WghrvIRkHzxfDMqndIV5gliYt+nmqXQIDAQAB;"

Once your tests are complete you can remove t=y. More info:

Where you see p=MIGfMA…, you need to concatenate the contents of the publicdkim.key file into one line, without the header/footer lines. For example, my publicdkim.key file originally looks like this:

-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDhrfYr2KLFiU7Zo6H06LlhFBEp
if/Tb7oBJvKdEIm1uED9FqJump/q6RSt3Yw1iuM3iBaQcPohGbdoGiuaJGOUWMOb
lsSXkAOWxl4lbI5UQ6zCTBpVdLVDVWJ0E3UW1YJs1crSBdmG9G3WghrvIRkHzxfD
MqndIV5gliYt+nmqXQIDAQAB
-----END PUBLIC KEY-----

You can see how my p=… is related.
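A quick one-liner that produces the single-line p= value from the PEM file (path as above):

```shell
# Drop the BEGIN/END lines and join the base64 into one line for the
# p= field of the TXT record.
grep -v '^-----' /usr/local/etc/publicdkim.key | tr -d '\n'; echo
```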

If you are using a GUI DNS editor tool, you’ll have to figure out how to place these into DNS using that tool.

Now to test it out:

Assuming DNS has propagated, you can try sending a couple of e-mails. `tail -f /var/log/maillog` while you do and you should be able to see the proxy in action. If you send mail to Hotmail or Gmail, you can view the message source and see the DKIM and DomainKeys header information as well.

You can also use the following tools which have been extraordinarily helpful to me when setting this up:

  • Send an e-mail from your server to:

It will respond with some helpful diagnostic information, but make sure you can receive an e-mail response to the reply-to field.

Submit your SPF record to evil Microsoft
If everything passes, you should submit your SPF record to Microsoft. They are sort of like the really strong dumb kid on the block who can steal your candy, but can’t perform a DNS lookup on their own. Without submitting to Microsoft, your mail will end up in the Hotmail spam folder with a SenderID temperror. You can sign up for a Hotmail account and view the full message source to see where it fails, then burn your monitor, keyboard and clothes for having signed up for a Microsoft product.

DomainKeys DNS Testing Tools:

That should get you on your way. I’ve pretty much covered every hurdle I’ve come across when setting this up. You’ll find plenty others who have similar, but slightly different setups by doing a cursory Google search.

Wireless Guest Network With Web Authentication

In a previous post titled “Home Wireless Router” I walked through my custom-built FreeBSD wireless router at home. In this post, we’ll add web-based authentication for guests. Essentially, when an unknown user connects to our network and browses the web, we’ll display our own website with a note letting them know we’re watching. They’ll have to agree to behave before they can actually browse the internet on port 80.

This will build on the previous “Home Wireless Router” post; start there first and make the appropriate changes noted below.

Using dhcpd to setup a split network:

We’ll use one network for trusted users and another for our untrusted guests. Perhaps, in the future, we can even throttle their bandwidth!

Here is what my /usr/local/etc/dhcpd.conf file now looks like.

subnet netmask {
  pool {
          option domain-name-servers;
          deny unknown-clients;
  }
  pool {
        option domain-name-servers;
        allow unknown-clients;
  }
  option domain-name "CANAAN";
  option routers;
  option broadcast-address;
  default-lease-time 600;
  max-lease-time 7200;
}
group trusted {
        host phone { hardware ethernet 00:0e:08:d5:c9:af; }
        host dell { hardware ethernet 00:11:43:75:0d:89; }
        host android1 { hardware ethernet f8:7b:7a:f0:c8:4b; }
        host android2 { hardware ethernet f8:7b:7a:f0:71:8b; }
        host ipad { hardware ethernet 10:93:e9:41:8f:26; }
        host macmini { hardware ethernet 68:a8:6d:59:9f:c9; }
}

I’m embarrassed, I really didn’t pay for all of those Mac products. If it isn’t obvious, you’ll need to replace those hosts and hardware Ethernet addresses with the addresses of your own trusted machines.

Packet Filter

Our packet filter configuration (/etc/pf.conf) is going to look a lot different than it did when we only had a wireless NAT, but it should be obvious what the differences do:

allowed_out="{http, https, ssh, domain}"
table <goodboys> {}
set block-policy return
nat on $ext_if from to any -> ($ext_if)
nat on $ext_if from to any port $allowed_out -> ($ext_if)
no rdr on $int_if proto tcp from <goodboys> to any port 80
rdr on $int_if proto tcp from to any port 80 -> port 80

In my prior wireless post, I stripped out all my fancy variables. Since we are now referring to the internal and external interfaces quite often, we might as well make our lives a bit easier by using variables.

NAT works as usual for the trusted network; the guest network is only allowed out on the ports we’ve specified in $allowed_out. The “rdr on” rule forwards port 80 on our untrusted network to our localhost’s port 80, so no matter what website our guests visit, they end up on our webserver! The “no rdr” rule excludes IP addresses listed in the “goodboys” table from that subsequent “rdr on” rule.

All we need to do once a guest authenticates is run a simple command from the command line to add their IP address to the “goodboys” list. That command would look something like this:

pfctl -t goodboys -T add

Setting up the web server
It’s pretty obvious what to do next: set up a webserver on localhost. You can write any sort of authentication script you like. I’ve placed a simple note letting my guests know that I’m not a sucker, nor a socialist, and that by using my network they agree to my terms. The “I Agree” link takes them to a simple PHP script:

<?php
// Add the guest's IP address to the goodboys table, then redirect them.
$output = exec("/usr/local/bin/sudo /sbin/pfctl -t goodboys -T add " . $_SERVER["REMOTE_ADDR"]);
header("location:");

It would be simple enough to do that in any other language.

But wait… pfctl needs to be run as root…

Right.. Install sudo and add this to your /usr/local/etc/sudoers file:

www ALL=NOPASSWD: /sbin/pfctl -t goodboys -T add 10.0.2.[2-254]

I used a wildcard pattern so that a vulnerability in my script couldn’t give a crafty guest free rein over pfctl. (Note that sudoers matches command arguments with shell globs, not regular expressions, so [2-254] really matches a single character from the set {2, 4, 5}; a looser 10.0.2.* behaves more like you’d expect.) I’m sure you could also play with the permissions of /dev/pf and make use of /etc/devfs.conf … but ehh, I think this does the trick.

Oh.. and don’t forget to change the netmask on your gateway

Our network is now (not /24) in comparison to the previous “Home Wireless” post. You’ll need to make the change in ifconfig and rc.conf .

Maybe not so obvious pitfalls..

  • A crafty user could assign themselves an IP address on the trusted network. If this were a product and we were concerned with real authentication, we wouldn’t have a trusted “backdoor” network.
  • Only port 80 is forced into our authentication scheme. The user could still make use of any other protocol in the $allowed_out variable. Again, if this were a product, we’d keep other ports closed and open them using our “goodboys” table, which contains the list of authenticated users.


This isn’t going to be an AWK how-to, but an AWK “why.” If you want a quick AWK tutorial, check out Grymoire’s site.

System administrators, DevOps engineers, or whatever you want to call them these days, often need to parse large log files in order to extract relevant data. In fact, this is a frequent interview quiz question for many ’nix sysadmin jobs.

“Here’s a log file and a ‘nix shell. Write a script that tells me x, y and z.”

Often, smart sysadmins opt for Perl, and there is nothing wrong with that. However, did you know AWK was designed specifically for generating reports of this kind? The principles and techniques are the same as in Perl, but AWK gives you a neat framework for generating them.

Here is a sample AWK script that parses an Apache log file, and spits out a list of IP addresses that have generated 1000 or more hits and how many hits they’ve generated sorted in descending order.

#!/usr/bin/awk -f
BEGIN {
    OFS="\t\t"                 #Set the output field separator
    print "IP Address", "Hits"
}
/^[1-9]/ {                     #Parse lines that start with a number (IPv4).
    iphash[$1]++               #Increment IP in our associative array.
}
END {
    sort="sort -k2 -nr"        #The sort command we'll use. Parameters may
                               #vary depending on your flavor of 'nix.
                               #You may need to replace -k2 with +2.
    for (i in iphash) {
        if (iphash[i] >= 1000) print i, iphash[i] | sort
                               #AWK's output buffer benefits us here
    }
    close (sort)               #close the sort pipe to flush output buffer
    print "TOTAL", NR          #print total number of records (hits).
}
Save it as ‘something.awk’, chmod it executable, and run `./something.awk /var/log/your.log`.
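To see the mechanics without waiting on a real log, here is the same pipeline inline, with the threshold dropped to 1 and a fabricated three-line “log” (the IPs are made up):

```shell
printf '1.2.3.4 a\n1.2.3.4 b\n5.6.7.8 c\n' | awk '
BEGIN { OFS="\t"; print "IP Address", "Hits" }
/^[1-9]/ { iphash[$1]++ }
END {
    sort = "sort -k2 -nr"
    for (i in iphash) if (iphash[i] >= 1) print i, iphash[i] | sort
    close(sort)               # flush the sort pipe
    print "TOTAL", NR
}'
```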

Two-Factor Authentication On FreeBSD

In my prior post I made the case against a rotating password policy and suggested two-factor authentication as a password policy that worked. Two-factor authentication requires both a password that is memorized and an item you have to verify that you are who you say you are.

Two-factor authentication doesn’t have to be expensive. In fact, Google has developed a solution that makes use of the smart phone to generate time-based, one-time verification codes that are unique to each individual user and the machine they are logging in to. It’s called Google Authenticator.

Installing it on FreeBSD is simple, but it probably won’t behave as you intended out of the box. I’ll walk you through installing it and tailoring it to your needs.

Step 1: Install libqrencode. This is optional, but you’ll be happy you did when you see the QR code in your SSH terminal to be scanned by your smartphone. This doesn’t have to be installed first, in fact, you can install it after Google Authenticator is installed as well.

cd /usr/ports/graphics/libqrencode
make && make install

Step 2: Install Google Authenticator.

cd /usr/ports/security/pam_google_authenticator
make && make install

Step 3: Download the Google Authenticator app to your smart phone. Sorry, I can’t help you with this step. Search for it in the market place, it’s free.

Step 4: As the user you’d like to generate a key for, run the program (on your FreeBSD machine):

google-authenticator
Choose ‘y’ for yes to update your .google_authenticator file and scan the QR code into your smartphone’s Google Authenticator app.
Step 5: Configure your OpenPAM config file. Let’s set it up to work for SSH. Using your favorite editor, add the following line to the bottom of your /etc/pam.d/sshd file:

auth     optional     /usr/local/lib/

You’re “done!” If you’ve set up Google Authenticator for a particular account, SSH will now prompt that user for the verification code their smartphone generates.


You’ve noticed that we used “optional” in the PAM config file. This means the use of it is, well, optional. Not only can users who have not set up Google Authenticator still log in, but users who have set it up can get away without it by leaving the verification code blank when prompted.

If you want to require Google Authenticator for your users you can change that to “required.” However, you then face another problem: users who haven’t set it up yet will not be able to log in.

The desired effect should be that users who have set it up are required to use it, but users who have not set it up can get away without it (at least during early deployment).

If you used Linux-PAM, this would be easy. Linux-PAM allows conditional statements within the PAM config file that, upon matching, skip a specified number of the following modules. OpenPAM, used by FreeBSD, to my knowledge doesn’t have that feature. So, to get the intended functionality we can apply a simple patch to pam_google_authenticator.c.

Warning: I suspect Google didn’t do it this way because, with this patch, setting the control flag in the PAM configuration file to “sufficient” essentially opens a gaping security hole that allows anyone to log in, without a password or a verification code, to any account where google-authenticator is not set up. You have been warned. With this patch, only use the “required” control flag in the PAM configuration.

Modify pam_google_authenticator.c by adding the following code just above the ‘// Clean up’ comment:

  if ((rc != PAM_SUCCESS) &&
      (secret_filename != NULL) &&
      (access(secret_filename, F_OK) == -1) &&
      (errno == ENOENT)) {
     log_message(LOG_ERR, pamh, "No config file found, skipping authentication");
     rc = PAM_SUCCESS;
  }
It’s pretty straightforward. If the secret file (by default .google_authenticator) doesn’t exist in the user’s home directory, we return PAM_SUCCESS, satisfying the two-factor authentication requirement for users who have not set it up.

Enjoy. If you have trouble re-compiling with the changes, post a comment.

Password Policies

Policies that require users to change their password every couple of months do nothing to increase security. I’ll try to make the case that a rotating password policy does nothing to protect against the attacks below, and instead encourages users to write down their passwords.

Let’s look at a few of the ways in which passwords are often compromised.

  1. A hardware or software based key-logger / virus / spyware is harvesting passwords.
    In this case, the employee or user types a password on a compromised machine, either at home, at a hotel, Internet cafe or on a friend/family members computer.
  2. Password is obtained by someone sniffing network traffic.
    This occurs when a plaintext protocol is used.
  3. The server’s password list is compromised.
    You had an SQL injection vulnerability in your web app or your password file was compromised and someone managed to get the hashed list of passwords. If some of the passwords were simple and a basic hash function without salt was used, then some of the passwords could be obtained by a hash-table dictionary or brute force. 
  4. Password was so simple that someone guessed it or brute-forced it.
    This is rare, but most people who don’t understand security think this is hacking, thanks to the improper portrayal of hackers in movies.

There are plenty of other ways in which a password can be compromised, but these will suffice for now.

The first and only question that needs to be asked in order to debunk the rotating password policy is this:

Once someone’s password is compromised, how does changing it six months later stop the attacker from using it today, tomorrow or next week? Even if the attacker is selling password lists on the black market, every criminal knows a list older than a couple of days is useless.

A second point demonstrates the ridiculous nature of forcing people to change their passwords:

When you force anyone to change their password every couple of months and demand that it be some complicated combination of alphanumeric characters, you’re forcing them to write it down, and most uneducated users will leave their written passwords in a cubicle.

A better solution

Two-factor authentication is the way to go if you don’t mind inconveniencing people and want to enforce a serious password policy. Two-factor authentication requires two things: a memorized password or private key, and a physical item that either generates a time-sensitive token unique to the user or verifies the user via SMS. Usually this is accomplished with a little device that goes on your keychain, but smartphone apps are now capable of providing the same thing. I installed Google Authenticator on one of my servers.

With two-factor authentication, even if someone obtains my password, they won’t be able to log in without the physical device I carry around in my pocket. Can two-factor authentication be broken? Yes: someone can use one of the methods above to steal my password, then hit me over the head with a hammer and take my device. Someone could also obtain the private key used to generate my one-time tokens, break the algorithm, or even gain physical access to the server!


Security isn’t about making it impossible to break in. At the end of the day we can concoct a zillion scenarios where even the pentagon could be overtaken. Security is about plausibility and probability. The plausible and probable methods of compromising a machine are not protected by a rotating password policy. They are protected by two factor authentication.

An added note: real security comes from educating users.

Apache Denial of Service (DOS) Attack

There may be situations when you want to throttle the amount of requests a specific user or IP address can make to your website. This works great if you are using Apache as a reverse proxy for security, availability or performance reasons.  Back in the Apache 1.x days there was a module called mod_dosevasive that did just the trick.  Unfortunately it did not work as well in Apache 2.x.

A much better solution is to use a module called mod_security.  mod_security allows you to write sophisticated, stateful rules and take action based on particular conditions. Using mod_security you can do a lot more than DOS evasive maneuvers. You can filter for XSS, SQL Injection, Mail Header Injection and lots more. It uses Perl regular expressions for the win.

In preliminary tests the filter does not block search engine spiders (at least not the ones that count).

If you are using FreeBSD ports, you’ll also need to change the default:

SecRuleEngine DetectionOnly

to:

SecRuleEngine On

and add:

SecDataDir /tmp

in /usr/local/etc/apache22/Includes/mod_security2/modsecurity_crs_10_config.conf

The following can be used in a VirtualHost directive, or included directly in httpd.conf. It can be easily tailored to suit your needs:

# Ignoring media files, count requests made in past 10 seconds.
SecRule REQUEST_BASENAME "!(css|doc|flv|gif|ico|jpg|js|png|swf|pdf)$" 
# This is where every other example online goes wrong.  We want the var to expire and leave it
# alone. If we combine this with the increments in the rule above, the timer never expires unless
# there are absolutely no requests for 10 seconds. 
SecRule ip:requests "@le 2" "phase:1,nolog,expirevar:ip.requests=10"
# if there were more than 20 requests in 10 seconds for this IP
# set var block to 1 (expires in 30 seconds) and increase var blocks by one (expires in 5 minutes)
SecRule ip:requests "@ge 20" "phase:1,pass,nolog,setvar:ip.block=1,expirevar:ip.block=30,setvar:ip.blocks=+1,setvar:ip.requests=0,expirevar:ip.blocks=300"
# If user was blocked more than 5 times (var blocks>5), log and return http 403
SecRule ip:blocks "@ge 5" "phase:1,deny,log,logdata:'req/sec: %{ip.requests}, blocks: %{ip.blocks}',status:403"
# if user is blocked (var block=1), log and return http 403
SecRule ip:block "@eq 1" "phase:1,deny,log,logdata:'req/sec: %{ip.requests}, blocks: %{ip.blocks}',status:403"
# 403 is some static page or message
ErrorDocument 403 "<html><body><h2>Too many requests.</h2></body></html>"

The above blocks users who send more than 20 requests in a 10-second period. They will be blocked for 30 seconds, unless this has been a frequent occurrence: if they were blocked more than five times within five minutes, they will be blocked for five minutes.

Home Wireless Router: FreeBSD 8

Perhaps a future post will demonstrate the use of FreeBSD for wireless AP’s in a commercial environment with roaming. This post will demonstrate a basic home router setup.


  • My wireless card (ath0) is equipped with the Atheros chipset.
  • Ethernet Nic (re0) is connected to a cable modem.
  • Ethernet Nic (em0) is connected to a switch for wired internet access.


  • Internal NAT:
  • We’ll bridge (bridge0) em0 and ath0’s wlan device (wlan0).
  • ISC-DHCP31 will respond to DHCP requests.
  • Packet Filter (PF) will do our routing.

You will need to know what to replace with your own configuration (not much).

Step 1: Install & Configure ISC-DHCP31 Server

  1. `cd /usr/ports/net/isc-dhcp31-server`
  2. `make && make install`
  3. Add dhcpd_enable=”YES” to your /etc/rc.conf file
  4. My /usr/local/etc/dhcpd.conf looks like this (be sure to change the domain-name and any other custom settings):
subnet netmask {
  option domain-name-servers;
  option domain-name "CANAAN";
  option routers;
  option broadcast-address;
  default-lease-time 600;
  max-lease-time 7200;
}

Step 2: Configure Network Settings

  1. Add the following to /etc/rc.conf
create_args_wlan0="wlanmode ap"
ifconfig_re0="dhcp"   #remember this is my cable modem, it gets an IP address via DHCP
ifconfig_bridge0="addm wlan0 addm em0"
ifconfig_wlan0="ssid chicken up"
hostname="CANAAN" #You'll want to change this.

Step 3: Configure Packet Filter

  1. Add the following to /etc/pf.conf
nat on re0 from to any -> (re0)

REMEMBER: re0 is the ethernet device connected to my cable modem. Your setup WILL be different. Want to learn more about that Packet Filter rule? Here is an EXCELLENT tutorial:
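For a slightly fuller picture, a minimal /etc/pf.conf for this setup might look like the following sketch. The interface names and the internal network are placeholders from this example, and the pass rules are deliberately permissive:

```
ext_if = "re0"               # interface facing the cable modem
int_net = "192.168.1.0/24"   # placeholder internal network

# NAT rules must appear before filter rules in pf.conf
nat on $ext_if from $int_net to any -> ($ext_if)

pass in all                  # deliberately permissive; tighten to taste
pass out all
```

You can syntax-check the file without loading it using `pfctl -nf /etc/pf.conf`.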

Done! Who would have thought it could be so simple?

You can either restart your computer or:

  1. `/etc/rc.d/netif restart`
  2. `sysctl net.inet.ip.forwarding=1`
  3. `/etc/rc.d/pf start`
  4. `/usr/local/etc/rc.d/isc-dhcpd start`
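Note that the sysctl and manual service starts above only last until the next reboot. To make forwarding and PF permanent, add the usual knobs to /etc/rc.conf:

```
gateway_enable="YES"   # sets net.inet.ip.forwarding=1 at boot
pf_enable="YES"
pf_rules="/etc/pf.conf"
```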

MySQL and Offsite Failover: Part I

In a previous post, “DNS and Offsite Failover”, I documented the implementation of an automated, offsite failover using DNS. For static websites there is nothing left to do except perhaps use `rsync` to keep the files up to date.

Unfortunately there is a lot more to think about for dynamic websites and applications that make use of a database.

  1. The databases need to be synced in real time.
  2. Keeping track of state is important: each server needs to know when it and its counterparts are up, down, active, or standby.
  3. In the event of a failover, they need to coordinate cleanup, merge changes, and start replicating again.
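As a sketch of point 2, the state bookkeeping can be as small as a file that each server updates after probing its counterpart. The probe below is a stub and the state-file path is an assumption, not something from the original setup:

```shell
#!/bin/sh
# Minimal state-tracking sketch. In real use primary_up would run a
# genuine health check (e.g. mysqladmin ping against the primary).
STATE=/tmp/failover.state

primary_up() {
    # Stub: always succeeds here; replace with a real probe.
    return 0
}

if primary_up; then
    echo "standby" > "$STATE"    # primary is alive, we stay passive
else
    echo "active" > "$STATE"     # primary is down, we serve traffic
fi

cat "$STATE"                     # prints "standby" with the stub above
```

A cron job running this every minute gives each side a current view of its role; the hard part, as noted above, is what happens on the transition back.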

With the level of complexity involved small businesses typically accept defeat and embrace the potential downtime. Some even plan for a manual failover in the event of a disaster.

There are two automated solutions that can be weighed in the balance.

The Reduced Functionality Offsite Failover
The easiest solution by far is to implement what I call the “Read-Only” or reduced functionality offsite failover. In this configuration we’ll keep another database synced offsite. In the event of downtime the offsite backup takes over, but in a reduced functionality mode. If the site supports user login or performs transactions, they can be disabled temporarily. Interactive sites effectively become “read-only” for the time being.

This poses a problem for commercial sites whose business is to perform transactions. Potential sales are lost, but at least new and old customers can still learn about products and get information. Nobody receives an ugly “Server not found” browser error; in fact, a custom message can be crafted explaining that full functionality will return shortly. This would be great for university sites, magazines, journals or even popular blogs where the primary purpose of the site is to get information.

This makes the life of the system administrator easy because the primary server doesn’t need to keep track of state, never needs to merge changes later on and can continue as it was when and if the network connection is restored.

In fact, if it’s acceptable for the offsite server to be a day behind the primary, implementation can be as simple as our prior DNS solution and a nightly cronjob that looks something like this:

ssh REMOTEHOST 'mysqldump --single-transaction -uUSERNAME -pPASSWORD -h DBSERVER DATABASE' | /usr/local/bin/mysql -uLOCALUSERNAME -pLOCALPASSWORD LOCALDATABASE
MYSQL="/usr/local/bin/mysql -uLOCALUSERNAME -pLOCALPASSWORD LOCALDATABASE"
$MYSQL <<EOF
delete from shared_sessions;
delete from main_cache_page;
delete from groups_cache_page;
insert into main_access (mask, type, status) values ('%', 'user', 0);
insert into groups_access (mask, type, status) values ('%', 'user', 0);
EOF

The above is an example of a Drupal site being placed into “read-only” mode. The ENTIRE database is pulled from the primary location via SSH (a shared key is used instead of a password). The session and cache tables are cleared, and Drupal’s access table is modified to block all but the admin user from logging in. If the database is large, it’s best to avoid this method and stick to replication. Replication will also give you real-time updates. We’ll cover that in the “fully functional implementation” in Part II. You can borrow elements from both while developing your own custom solution.
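Scheduled nightly, the crontab entry might look like this, assuming the dump-and-import commands above live in a script (the path and schedule are hypothetical):

```
# /etc/crontab -- pull the primary database every night at 03:00
0  3  *  *  *  root  /usr/local/sbin/sync-offsite-db.sh
```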

You can even set server variables in Apache:

SetEnv readonly yes

And in the theme layer, detect the server variable ‘readonly’ to know when to hide the login box or display your custom reduced-functionality notification, so that two separate code bases do not need to be maintained. A Drupal example:

  if ($region == 'login_slide' && empty($_SERVER['readonly'])) {
    // Don't display the login field if the server is in read-only mode.
    drupal_set_content('login_slide', pframe_login_slide());
  }

Fully Functional Offsite Failover
There are few sites that can afford to deny customer transactions. Reduced functionality mode is the easiest solution to implement, but it isn’t very practical for most business models. A fully functional offsite failover is harder to implement. As stated before, it requires both machines to keep track of state and to resolve data conflicts when the primary changes from down to up. The logic becomes even more complicated when you take into account the transition back from secondary to primary. The delays introduced by DNS caching and by ISPs not respecting TTLs can lead visitors to both the primary and secondary locations at the same time and throw our databases out of sync. This method requires a lot more thought.

We’ll address the implementation of the fully functional offsite failover in Part II.