Blocking ad networks with named

I’ve meant to do this for ages, so on the first day of my “staycation”, despite vowing to myself that I wouldn’t look at a computer screen this week (hey, it’s not technically the start of my week off yet, is it?), I fiddled this morning with BIND to try to avoid seeing ads on my devices. While AdBlock works great in my browsers, that doesn’t transfer to mobile devices, apps with built-in advertising, and so on.

Unless you’re running your own BIND DNS server at home, you won’t be able to do this. Even if you do have a home network with named running (my local network does), this won’t really work for you (at least not in the way that I’ve done it) unless you also restrict all outbound DNS and allow DNS lookups only from your named server (which I do; it forces all of the machines on the network to use my DNS server, which in turn is configured to only ask OpenDNS for DNS info).

So this assumes some knowledge of BIND and networking. This is not so much a tutorial on how to configure BIND as it is some quick tips and shared info on what I did this morning.

First you need to set up a master zone. Mine looks like this:

zone "" {
        type master;
        file "master/";
};

NOTE: You may also need the following in your options section, but I’m not 100% sure as it was there before:

    response-policy {
        zone "";
    };

This makes anything defined in this zone considered authoritative, just like the DNS settings I have for my local network. As an aside, you can use this to block entire domains (like youtube or facebook if you have kids at home staring at screens all day…).
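As a sketch of what that could look like (the domain names here are just examples, and this assumes the zone is being consulted as a response-policy zone, as configured above), records like these block a whole domain and everything under it:

```
; block an entire domain and all of its subdomains
facebook.com      IN    CNAME    .
*.facebook.com    IN    CNAME    .
```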

I then wrote a script which pulls data from MVPS Hosts. Their data is meant to be put into a hosts file, but that means it would only work on a single machine, and I’m trying to solve a multi-machine/mobile issue, not just a single computer. The script takes my zone file and mashes in data from MVPS Hosts to create a new file that we will use:


#!/bin/bash

# Path to the zone file created above (hypothetical path; adjust for your setup)
source=/etc/namedb/master/adblock.db

input=$(mktemp /tmp/mvps.hosts.XXXXXX)
output=$(mktemp /tmp/mvps.zone.XXXXXX)
serial=$(grep serial ${source} | awk '{print $1}')
n_serial="$(date +%Y%m%d)01"

# Grab the MVPS HOSTS list (their standard download location)
curl -s http://winhelp2002.mvps.org/hosts.txt >${input}

dos2unix -o ${input} >/dev/null 2>&1

# Sanity check: bail if the download looks truncated
lines=$(wc -l ${input} | awk '{print $1}')

if [ ${lines} -lt 10000 ]; then
    exit 1
fi

# Preserve everything in the existing zone file up to the ADHOSTS marker
while IFS= read -r line; do
    if [ "${line}" = ";START ADHOSTS" ]; then
        break
    fi
    echo "${line}" >>${output}
done <${source}

echo "" >>${output}
echo ";START ADHOSTS" >>${output}
for hostname in $(egrep -v '^#' ${input} | awk '{print $2}'); do
    if [ "${hostname}" != "localhost" ]; then
        echo "${hostname}    IN    CNAME    ." >>${output}
    fi
done
echo ";END ADHOSTS" >>${output}

# Bump the zone serial to today's date
perl -pi -e "s/${serial}/${n_serial}/g" ${output}

rm -f ${input}
cp -f ${output} ${source}
rm -f ${output}

Note that you need dos2unix installed. Everything else is fairly standard. The MVPS Hosts file seems to be updated monthly, so this is something you could add to a monthly cronjob or just run manually every once in a while. So far it seems to work pretty well over here. I had initially thought about writing something in Python, but bash is just so much faster (for me).
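To make the transformation concrete, here is essentially the same egrep/awk pipeline from the script, run on a single made-up hosts-file line:

```shell
# A hosts-file entry pairs an IP with a hostname; keep just the hostname
# and emit a zone record that CNAMEs it to the root, as the script does.
echo "0.0.0.0 ads.example.com" \
  | egrep -v '^#' \
  | awk '{print $2 "    IN    CNAME    ."}'
# prints: ads.example.com    IN    CNAME    .
```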

Also, if you put things in your zone file before the “;START ADHOSTS” line they’ll be retained, so if you do want to block specific domains yourself (for instance, the ones that serve iOS iAd ads) you still can, while also taking advantage of the MVPS Hosts list (if someone has a better list, I would love to see it).

I hope this helps someone else out. Comments for improvement are welcome, this was a pretty quick-and-dirty script that, I’ll admit, does a few things oddly.

Custom Email Notifications in GitLab

I started playing around with GitLab last month in order to get to know it a bit better and, while I like it well enough, the one thing that drove me nuts was the email it sent out alerting of changes. My old git setup used the wonderful git-notifier script to send out emails and I much prefer the format it used to the format GitLab uses. Unfortunately, at that time, without ponying up for the enterprise edition it didn’t look feasible to change without some serious work that I didn’t have the time or energy to invest.

Yesterday I was looking at the latest version (7.6.2) and noticed the community edition support for custom hooks. After upgrading, I fiddled with it and git-notifier to try to make the two work well together. With a little elbow-grease (git-notifier works well with straight git repos or gitolite) I got it to work, although it is a bit of a nuisance because, with regular git or gitolite, you can get some information from the repo exposed via the calling scripts and environment that does not seem to be present in GitLab.

If you follow the instructions on the custom hooks document referenced above, you’ll end up with something along the lines of /var/opt/gitlab/git-data/repositories/<group>/<repo>.git/custom_hooks (in my case it is /srv/git-data/repositories/<group>/<repo>.git/custom_hooks). In this directory (which must be owned git:git, including all its contents) lives a post-receive script which looks like:



#!/bin/bash

# The variable values here are examples; adjust for your installation
base_dir="/srv/git-data/repositories"
repo_name="mygroup/myrepo"
git_host="git.example.com"
send_from="git@example.com"
send_to="commits@example.com"

pushd ${base_dir}/${repo_name}.git >/dev/null 2>&1

/srv/git-hooks/git-notifier $@ --link="http://${git_host}/${repo_name}/commit/%s" --sender="${send_from}"  \
  --mailinglist="${send_to}" --repouri="ssh://git@${git_host}:${repo_name}.git" --emailprefix="[git/%r]"

popd >/dev/null 2>&1

I have git-notifier in a directory called /srv/git-hooks and it’s owned root:root and mode 0755. This will tell git-notifier to send an email to the $send_to address, from the $send_from address, and defines a few things like the repository itself and the host (all things that would normally be exposed via the environment in a git or gitolite setup but are lacking with GitLab). But this can be used as a template and the only thing you should have to change is the value of $repo_name (everything else can be the same unless you need to define them differently per-repo or per-group).
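Putting the hook in place looks something like the following. This is only a sketch that uses a scratch directory so it can be run anywhere; on a real install you would use the actual repository path under /srv/git-data/repositories, and the chown (which requires root) is mandatory:

```shell
# Scratch stand-in for /srv/git-data/repositories/<group>/<repo>.git
repo=$(mktemp -d)/myrepo.git
mkdir -p ${repo}/custom_hooks

# Minimal post-receive that hands everything off to git-notifier
# (the git-notifier path matches the setup described above)
cat > ${repo}/custom_hooks/post-receive <<'EOF'
#!/bin/bash
exec /srv/git-hooks/git-notifier "$@"
EOF

chmod 0755 ${repo}/custom_hooks/post-receive
# chown -R git:git ${repo}/custom_hooks   # required on a real GitLab host

ls -l ${repo}/custom_hooks/post-receive
```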

The downside to this is that you need shell access to set it up, which may prove troublesome for larger installations or shared environments. For a personal or work environment this is probably an ok requirement. Make sure that you disable the “Emails on push” service for the repository in GitLab or you’ll get both the stock GitLab commitdiff email and git-notifier’s email.

I’m extremely grateful to those who contributed this support to GitLab, as it means I spent a lot less time dorking around with this than I would have had I done it all myself. While it was a bit of a nuisance to set up, it works quite well and I’m back to getting my old style of email notifications, which are much more useful. For one thing, GitLab seems to have an upper size limit on the emails and, if that is exceeded, it sends no mail at all, whereas git-notifier will send you a list of changed files without the actual diff: a much more useful and meaningful email than sending nothing at all. (If you look at git-notifier’s changelog you’ll see that behaviour was contributed by me in version 0.3-18, almost 2.5 years ago; that’s how long I’ve been using git-notifier.)

I wish I could contribute some sane code back to git-notifier to support GitLab, but without GitLab exposing things like the repository name or committer name to the environment I don’t think it would be possible unless I’ve missed something non-obvious.

SSL Certificate Verification failure with fink’s Python 2.7.9

Python 2.7.9 was released nearly a month ago and with it came some SSL-related changes (it backported the Python 3.4 ssl module and does HTTPS certificate validation using the system’s certificate store). The latter can cause some problems with home-grown CAs, however. On Mac OS X, the CA certificate store is in the Keychain Access application, which isn’t exposed to command-line tools like Python. This will cause HTTPS certificate validation to fail because Python doesn’t know anything about the CA certificate used to sign the certificate being used by an HTTPS server.

If you’re using the system OpenSSL, supposedly you can export the CAs of interest to the /System/Library/OpenSSL/cert.pem file (untested). I use fink, and fink’s OpenSSL does not seem to use that location. Instead it uses /sw/etc/ssl/, and if you install fink’s ca-bundle package you will have a stock /sw/etc/ssl/certs/ca-bundle.crt file which presumably works with some applications. This file can be replaced with an updated CA bundle containing the CA certificate that is used to sign the service(s) you want to connect to.

However, replacing that file is not enough. If you upgrade to Python 2.7.9 in fink and make that change, you will still see this annoying error:

[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)

when attempting to connect to a site using a certificate signed by a non-stock CA. Note that prior to 2.7.9, Python did not do this CA validation so you would not see this error until upgrading to 2.7.9.

The fix is quite simple. Put your new ca-bundle.crt file in place as noted above, and then, as root, symlink this file to /sw/etc/ssl/cert.pem:

# cd /sw/etc/ssl
# ln -s certs/ca-bundle.crt cert.pem

Now when using Python 2.7.9 (on a fink-using system) you will be able to connect to those sites and avoid the “certificate verify failed” error noted above.
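As a quick sanity check, you can ask Python where its OpenSSL expects the default CA bundle to live (shown here with python3 for illustration; on a fink-built Python the path printed should resolve under /sw/etc/ssl):

```shell
# Print the compiled-in default CA bundle path used by Python's ssl module
python3 -c 'import ssl; print(ssl.get_default_verify_paths().openssl_cafile)'
```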

Merry Christmas 2014!

I just wanted to wish everyone a Merry Christmas and Happy New Year, from my family to yours. My wife found the most awesome card for my teammates at work and it is just too good not to share with everyone — for those who are programmers or into IT, this is perhaps one of the most fitting cards for our industry. =)

God bless you all and my prayer for each and every person reading this is that this year has been good, but that next year will be even better!

Heartbleed



I’ve refrained from posting or saying anything about Heartbleed all week because I didn’t want to add to any sensationalism and hype, and I’ve also been too busy actually dealing with it (as opposed to simply talking about it or running around with hands waving in the air like a mad man). Now that the dust has settled a bit, I just want to link to some sites that I think are good to keep handy as we see this play out. I don’t need to talk about the flaw itself as all you need to do is google “heartbleed” and you’ll get all the info you want; certainly more than I can provide here (although you will have to distill the sensational from the facts).

So, the sites:

  • Heartbleed Bug Health Report; they’re keeping it up to date, but it’s essentially a “top 1000 still-vulnerable sites” list
  • Mashable’s Heartbleed Hit List which has a list of some of the bigger sites/services that were (or were not) affected and whether they still are; when I looked this morning it was last updated as of last night so presumably they’re keeping it fairly up to date
  • DigitalTrends Mobile app list which has a list of vulnerable/not-vulnerable mobile apps
  • The Heartbleed site which is being kept up to date with regards to linking to various advisories

Some of these sites (and the apps that use them) have been fixed this week. There is speculation that this flaw has been known for a while, which means the “window of opportunity” may be much bigger than initially thought. Some of the numbers being tossed around are pretty gross exaggerations, though (one I saw was “66% of the internet vulnerable!”), so you have to take things with a grain of salt. The best advice is to look at the sites you use, and if they have fixed the flaw (and were previously vulnerable) and recommend you do something (like changing your password), strongly consider doing as they suggest, PROVIDED THEY HAVE ALREADY FIXED THE FLAW! Sorry for the caps, but I talked to some people yesterday who had rushed to change their password and, when I asked them if the site in question was fixed already, they gave me a blank stare.

It does you NO good to change your password to a site that is STILL vulnerable. You will only have to change it again.

Anyways, look at the sites noted above, breathe, and keep in mind that changing passwords occasionally is a good thing. Maybe now is the time to start using something like LastPass, 1Password, KeePass, or something similar, and have it generate pure random nonsense for a password, knowing that you can use the tool/service to remember it for you (although, arguably, this whole situation makes me quite happy that I use 1Password, an app on my computer, instead of a service).

My last point on this is that people need to upgrade if they’re using an affected version of OpenSSL. If you are, and your operating system provides it (which is the case with Red Hat Enterprise Linux and Fedora, among many others) then you really should be updating to the packages provided. It’s not a question of whether you should or shouldn’t — you should. Period. This has been a crazy week and a lot of crazy things have happened and this is a really really bad thing IF you’re affected. So if you are (as in you’re running Red Hat Enterprise Linux 6.5 or a current Fedora, etc.) then you really need to update ASAP. And then you need to assess your next steps (changing passwords on vulnerable (and now fixed) services, revoking and reissuing certificates if you feel it necessary, etc.).

Anyways, that’s all I have to say about Heartbleed. It will be interesting to see what the next few weeks will be like as we continue to get a bigger picture of what’s happened here, how, and to whom. And to see what damage has been done, and who responded appropriately and when. For instance, if there were a site or service I was using and as of today (being Saturday, and this thing exploded on Monday) it was still NOT fixed, I don’t think I would be using that site/service anymore. To put it into perspective, Red Hat had updates out late Monday for Red Hat Enterprise Linux 6.5 and the other affected products early Tuesday morning (my time). Everything was available to customers in under 24hrs. It’s not hard to install — “yum update” and reboot (to make sure everything is covered). So for a site to be still affected by this now? There’s really no excuse as far as I’m concerned.

Finally, just to note that I did get some minor press coverage (so this is more vanity than useful), LinuxInsider reported on Heartbleed and my name is noted, although my answers to the questions must have been less than exciting as there wasn’t too much noted there other than where Red Hat customers could go for more info. =)

And to finish off, the obligatory xkcd:

20 years of tattoo collecting

So this year I’m going to be 38, which means that I’ve been collecting tattoos for 20 years. The biggest “rush” of ink has been in the last 5-6 years as I’ve actually been able to afford it, whereas before it was getting a piece done whenever I could spare a few hundred dollars (which wasn’t often), and it also meant the pieces were smaller. The challenge with the sleeves was that we had to merge these things together to make them look a bit cohesive. I think, considering I have maybe 2-3hrs left to finish the right arm, that we’ve managed this pretty well. I want to sincerely thank Jared Phair of Crimson Empire for the amazing work he has done on both sleeves. He’s done a fantastic job with everything I’ve thrown at him. And now I’ve thrown my wife at him, and Angela’s tattoo is looking amazing as well.

I invite everyone who’s interested to look at my tattoo set on Flickr, and Angela’s tattoo set on Flickr. There you will see all of the pictures. I did want to embed two pictures in my post, however, as I think they are quite amusing to compare.

This is today:

This is about 15 years ago (1999):

A lot has changed in 15 years!!