What is Google foo.bar?

A week or two ago, the following popped up on my screen during a search for a Python-related topic:

You're speaking our language. Up for a challenge?

I had seen this before after our CTO got the same mysterious message a few months ago. We initially thought it was another one of Google’s Easter eggs, but a quick search revealed that everyone from HN and Reddit to Business Insider seems to think it’s a recruiting move by the search giant. (A similar program was rumored to be a search for cryptanalysts, but turned out to be related to The Imitation Game, so who knows?)

Update: it is a recruiting portal. Both of us were contacted by Google and interviewed on-site. The actual interview is under NDA, but I’ll post more about the interview process itself later.

The first time around, we discovered that replicating the query doesn’t necessarily trigger an invite, and visiting the URL without an invite doesn’t work. It was suggested that the invites are sent to a subset of users who have enabled search history. When I got the invite a week or two ago, I registered and then hit the “Back” button. The query string was preserved, so we tried an experiment: is the invite based on a tagged query string, or the result of some back-end processing? I sent the URL to a couple of coworkers who had searched the same query without receiving an invite, and they accessed it directly. We learned two things:

  1. Both of them subsequently received an invite.
  2. One of them hit “refresh” as the animation began to show the box, and no invite was shown upon refresh. Opening the link in an Incognito window gave him a second chance.

The most likely scenario is that certain queries redirect to the results page with a query string, which triggers the message. Neither of the other developers writes much Python, yet both got an invite after visiting the link, so it’s likely that Google doesn’t validate invitee status. I doubt this is a simple oversight; more likely it indicates one of two things:

  1. Invitees are not on some sort of pre-selected list; and/or
  2. Google isn’t worried about additional invitees.

The latter was proven when the program displayed a “refer a friend” link. Assuming the recruitment theory is correct, it’s likely that Google is operating under the assumption that high-quality developers will refer other high-quality developers. I don’t know for sure, but this is probably a valid assumption.

To clarify some of the speculation, I was asked if I’d like a Google recruiter to contact me after completing the first six challenges.

Well, there goes that theory.

Others have asked Google directly about the program, and received a Python snippet that prints “glhf” in response – essentially “no comment”.

A Quick Tour

The pseudo-terminal responds to *nix commands like ls, cat, and less, and features its own editor. Listing the directory shows a text file:

Contents of start_here.txt

The help menu offers several possible commands:


The game is split into five levels, each containing its own set of challenges. Challenges fall into one of five categories, or tags.
Google Foobar Tags

Unfortunately, there has only been one crypto challenge available so far, and I haven’t been able to score a low_level challenge.  Most of the challenges I’ve completed so far involve one-off applications of computer science problems – like whiteboard interview questions with a twist. Additionally, there are constraints on execution time and memory use, which prevent some naive implementations from passing the test cases. This speaks to the needs of a company like Google, which requires, or at least desires, efficient implementations rather than generic Algorithms 101 approaches.

I’ll be posting my solutions to GitHub shortly, along with some explanations here.


Visual Binary File Analysis with Python

Update: Added a colorize function:

With colorize

Here’s a quick Python script to visualize binary data. In the grayscale example, each pixel’s intensity is the byte value (0x00 – 0xFF). The same method is used for colorization, except the byte value supplies the hue and value components in HSV color space (saturation is fixed at 0.99).
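The byte-to-color mapping itself is simple enough to sketch with the standard library’s colorsys module (the function name is mine, not the original script’s):

```python
import colorsys

def byte_to_rgb(b):
    """Map a byte (0-255) to an RGB triple via HSV color space.

    The byte supplies both the hue and the value; saturation is fixed
    at 0.99, as described above.
    """
    h = b / 255.0                     # hue sweeps the color wheel
    v = b / 255.0                     # brightness rises with the byte value
    r, g, bl = colorsys.hsv_to_rgb(h, 0.99, v)
    return (int(r * 255), int(g * 255), int(bl * 255))

# 0x00 stays black; higher byte values get brighter and shift in hue
print(byte_to_rgb(0x00), byte_to_rgb(0x80), byte_to_rgb(0xFF))
```

Writing those triples out as pixels (e.g. with PIL) gives the colorized images shown above.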

The cols parameter is the width of the image to be generated (in pixels). By default, the script generates a couple of different sizes. The height is calculated based on the width. Patterns tend to be clearer when the column width is a multiple of 8 (16, 32, 64, 128…), though that could depend on the format and type of data in the file.

As an example, here are some images from a 256-byte file generated with the following Python program:

with open('foo.txt', 'wb') as fd:
    for i in range(256):
        fd.write(bytes([i]))  # write each byte value once, 0x00 through 0xFF

Bytes in range(0,256)


./process_dir.py <dirname> <cols>

The program will generate images for each of the binaries in the specified directory, create an “index.html” file and attempt to launch it in the browser.


The generated image on the left is from a PNG file. A dark patch in the beginning with a mostly-uniform distribution is consistent with file headers followed by image data.

The image to the right is an OpenOffice Writer file. The striped area indicates a repeating pattern of bytes, which often separates the metadata header and content in word processor files. The example screenshot shows an image generated from a compiled binary.

This can also be used to visually approximate the amount of entropy in a file. A high-entropy file would have a uniform byte distribution, thus occupying all of the available colorspace. I’ll include a histogram function later. This would show the frequency distribution of the bytes as well.
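In the meantime, here is a sketch of how that measurement could work, using Shannon entropy over the byte counts (the function name is mine):

```python
import math
from collections import Counter

def byte_entropy(data):
    """Shannon entropy of the byte distribution, in bits per byte.

    0.0 means a single repeated byte; 8.0 means a perfectly uniform
    distribution, as you'd expect from /dev/urandom or encrypted data.
    """
    counts = Counter(data)
    n = len(data)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h if h > 0 else 0.0  # normalize -0.0 to 0.0

print(byte_entropy(bytes(range(256))))   # 8.0: every byte value once
print(byte_entropy(b"\x00" * 1024))      # 0.0: one repeated byte
```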

Compare the outputs of the following files:

  • An MP3 file
  • /dev/urandom
  • A TrueCrypt container (AES with RIPEMD-160)
  • A plain text file
MP3 File
Data from /dev/urandom
TrueCrypt container


TomatoUSB extremely slow (0.5 mbps) wifi with normal wired speed

I recently started using TomatoUSB on a Cisco/Linksys E1200 router and noticed that I had an extremely slow download speed (around 0.5 Mb/s) and an acceptable upload speed (5.5 Mb/s). After checking QoS settings and the “Bandwidth Limiter” tab, I found a forum post indicating the WMM settings severely affected the speed. Disabling WMM increased the speed to around 1.5 Mb/s.

To disable WMM, go to Advanced > Wireless and look for the WMM field.


This, along with a call to Comcast, has increased my WiFi speed to around 7 Mb/s (25+ wired).


Dovecot on Ubuntu 12.04: postmaster_address setting not given

Dovecot: Error reading configuration: Invalid settings: postmaster_address setting not given

status=deferred (temporary failure. Command output: lda: Error: user <user>@<domain>: Error reading configuration: Invalid settings: postmaster_address setting not given lda: Fatal: Internal error occurred. Refer to server log for more information. )

While the error message itself is quite clear (the postmaster_address setting is missing), some of the highly-ranked answers didn’t quite work for me.

First, check the output of:

dovecot -a | grep postmaster_address

Expect no results (the setting isn’t given, after all). If you do have results, check that the setting is declared correctly.

The ‘postmaster_address’ Setting

On Ubuntu 12.04, I found the postmaster_address setting is defined in two places:

  • /etc/dovecot/conf.d/15-lda.conf
  • /etc/dovecot/dovecot.conf

I found the setting at the top of /etc/dovecot/conf.d/15-lda.conf to be commented out. Intuitively, you might uncomment this line and provide a setting, but if that doesn’t work, open /etc/dovecot/dovecot.conf and search for the following section:

protocol lda {
    mail_plugins = sieve quota
}

Now, add postmaster_address:

protocol lda {
    mail_plugins = sieve quota
    postmaster_address = postmaster@domain.com
}

Finally, restart dovecot:

sudo service dovecot restart

EnvironmentError: “mysql_config not found” While Installing MySQL-Python

While running “pip install mysql-python” on a fresh installation of Linux Mint 17, the following error occurred:

Traceback (most recent call last):
  File "<string>", line 17, in <module>
  File "/tmp/pip_build_root/MySQL-python/setup.py", line 17, in <module>
    metadata, options = get_config()
  File "setup_posix.py", line 43, in get_config
    libs = mysql_config("libs_r")
  File "setup_posix.py", line 25, in mysql_config
    raise EnvironmentError("%s not found" % (mysql_config.path,))
EnvironmentError: mysql_config not found


This problem is caused by the ‘mysql_config’ file not being in your PATH, likely because it’s not there at all.


Ensure that the libmysqlclient-dev package is installed:

sudo apt-get install libmysqlclient-dev -y

If you are still getting the error after this, ensure that your MySQL library is in your path:

echo $PATH

If that’s still not working, you can edit the “setup_posix.py” file and change the path attribute to match your local installation:

mysql_config.path = "/path/to/mysql_config"

(Note that MySQL-python can also be installed with apt-get install python-mysqldb.)


Error loading docker apparmor profile

I recently installed Docker and came across an error while starting the daemon:

INFO[0000] +job serveapi(unix:///var/run/docker.sock)
INFO[0000] +job init_networkdriver()
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
INFO[0000] -job init_networkdriver() = OK (0)
INFO[0000] WARNING: Your kernel does not support cgroup swap limit.
FATA[0000] Error loading docker apparmor profile: fork/exec /sbin/apparmor_parser: no such file or directory ()

The error indicates that /sbin/apparmor_parser couldn’t be found. The easiest route is probably to just apt-get install apparmor, but I didn’t want to add apparmor to this machine for a number of reasons. Without apparmor, I couldn’t care less if the profiles are parsed, so I decided to substitute the binary with a shell script.

In this instance, the fork call probably just needs to find a file to execute and receive an exit code of 0.

sudo emacs /sbin/apparmor_parser

#!/bin/sh
# Dummy program: do nothing and report success
exit 0

After closing the file, be sure to chmod +x /sbin/apparmor_parser to make it executable. This technique works because the program is looking for a binary to execute and will most likely check the callee’s return code (or its stderr output). Note that this won’t always work, as some scripts and programs rely on program output, or a lack of it (if not stderr).

If modifying programs in /bin/ or /sbin/ makes you uneasy, you can always place the script at ~/bin/apparmor_parser instead. Recent versions of Ubuntu and Mint include a statement in .bashrc to add ~/bin to the PATH if it exists. (Of course, you can always export any arbitrary folder to your PATH too.)



Default parameter values in bash

Since it’s often easier to understand an example rather than a detailed explanation, here are a couple of examples illustrating how to handle default variable values in Bash. In addition, it’s often useful to be able to use environment variables (e.g., to specify the path to a binary in a build script), so I’ve included that as well. All of the code is available on GitHub Gists.

#1 – Specifying a default value for a Bash variable

Here’s a quick and easy method to provide default values for command-line arguments in Bash. It relies on Bash’s syntax for accepting default variable values, which is ${VARNAME:-"default"}. As far as I can tell, the double-quoted portion allows anything that normal variable expansion allows.
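The gist isn’t reproduced above, so here’s a minimal sketch of the pattern (the script and variable names are hypothetical):

```shell
#!/bin/bash
# greet.sh (hypothetical): the first CLI argument defaults to "world"
name=${1:-"world"}
echo "Hello, $name"
```

Running `./greet.sh` prints “Hello, world”, while `./greet.sh Bob` prints “Hello, Bob”.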

#2 – Specifying a default value in a Bash function

This is really no different from above, but illustrates how the same pattern works inside functions. In this example, the interface name ($iface) can be specified as the first parameter. Each function then uses the same method to gather its arguments, falling back to the “global” defaults (CLI args) if not specified. (Note that in Bash, variables are global in scope by default. To override this behavior, use the local keyword.)
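A sketch of that structure (the function and interface names are hypothetical):

```shell
#!/bin/bash
# The "global" default: interface name from the first CLI arg, or eth0
iface=${1:-"eth0"}

bring_up() {
    # The function's own argument falls back to the global default
    local dev=${1:-$iface}
    echo "bringing up $dev"
}

bring_up          # uses the default: eth0
bring_up wlan0    # an explicit argument wins
```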

#3 – Command output as default variable values

It’s also simple to use the output of an evaluated expression as the default value. This is great for getting system information (username, current working directory, etc.) or information that is easily generated on the command line — date constructs, random passwords, etc.
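For example (a hedged sketch; the original gist may differ):

```shell
#!/bin/bash
# Fall back to command output when an argument isn't supplied
user=${1:-$(whoami)}
stamp=${2:-$(date +%Y-%m-%d)}
echo "backup-$user-$stamp.tar.gz"
```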

#4 – Override default values with environment variables

The following script uses the ‘htpasswd’ and ‘openssl’ binaries, which are usually specified by their full paths (the output of ‘which htpasswd’). By replacing the standard definition with ${ENV_VAR-$(which htpasswd)}, you can ‘override’ the default value with an export statement.

The script also takes an optional first and second parameter, which default to the current user and a random password respectively. If a password wasn’t specified, show the generated password to the user (otherwise, don’t display raw password info).
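The override pattern itself can be sketched like this (HTPASSWD is a hypothetical variable name; the original script’s names may differ):

```shell
#!/bin/bash
# HTPASSWD unset: fall back to `which`'s output (which may be empty if
# the binary isn't installed)
htpasswd_bin=${HTPASSWD-$(which htpasswd)}

# HTPASSWD pre-set (e.g. via `export HTPASSWD=/opt/apache/bin/htpasswd`
# before running the script): the fallback is skipped entirely
HTPASSWD=/usr/local/bin/htpasswd
htpasswd_override=${HTPASSWD-$(which htpasswd)}
echo "$htpasswd_override"
```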

Example #5 – Just Because

Just a shorter, harder-to-read version.

Example #6 – Exit with an error if parameter is empty

Sometimes the input must come from the user, and the script needs to terminate if the user hasn’t specified the correct arguments. This can be done by using a question mark instead of a default value:
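A one-line sketch of the technique (foo.sh is hypothetical; it’s written to /tmp here just to keep the example self-contained):

```shell
# Line 2 of foo.sh aborts the script whenever $1 is empty
cat > /tmp/foo.sh <<'EOF'
#!/bin/bash
user=${1:?"You must specify a username"}
echo "Creating account for $user"
EOF

bash /tmp/foo.sh alice    # succeeds
bash /tmp/foo.sh          # prints the error and exits non-zero
```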
This results in output like:

./foo.sh: line 2: 1: You must specify a username

Example #7 – Exit with an error if binary not found

This could probably be made shorter, but it works. This statement tries to fill the value of $ifconfig with either $IFCONFIG or the output of which ifconfig. If both are empty, the boolean OR || is triggered, which echoes an error and returns 1. Still unsatisfied, the final OR is triggered, causing the script to exit with status 1. Structuring your exit codes like this allows this script to be used in a similar fashion inside of other scripts or crontabs.
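A sketch of that chain (hedged: IFCONFIG is pre-set here so the sketch runs even on machines without ifconfig; drop that line to exercise the `which` fallback):

```shell
#!/bin/bash
IFCONFIG=${IFCONFIG:-/sbin/ifconfig}   # remove to test the real fallback

ifconfig=${IFCONFIG:-$(which ifconfig)}
[ -n "$ifconfig" ] \
    || { echo "Error: ifconfig not found" >&2; false; } \
    || exit 1
echo "using $ifconfig"
```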


BlackBag Tool – A Framework for Rapid Information Discovery

Last Update: 14-Nov-2014

I’ve decided to pick up on the BlackBagTool project, which is an attempt at a program/framework to find interesting information on a mounted hard drive. The end-goal is an application that allows an investigator to gather a 2-minute summary of the information on the drive and act as a springboard for the overall investigation. This is an attempt at nailing down a spec.


The layout consists of a series of Python modules and small scripts (installed to /usr/bin) that can be used in conjunction with each other. I’m debating whether or not to include an optional prefix on the command names for namespacing reasons.

The small, individual scripts can then be piped together or included in shell scripts to automate the discovery process. The Python modules can also be imported into scripts or used in the REPL.

I’m also aiming to build an application around this set of tools that fully automates the task of:

  1. Take the mount directory as an argument
  2. Determine the operating system (based on files/paths/etc)
  3. Gather relevant OS files (/etc/shadow, ~/.bash_history, recent documents, etc)*
  4. Determine what applications are installed, and possibly which versions
  5. Gather relevant application data (recent files, configuration/settings, history, cookies, etc)
  6. Parse data according to known formats and process fields against known patterns (dates, email addresses, etc)

Email address in a <title> tag. Interesting email addresses can be found in browser history title fields.

The modules so far:
  • dbxplorer – A module for automatically gathering information about databases on a computer (db files, tables, raw data). Working on support for MySQL and SQLite now.
  • fsxplorer – A module for filesystem scanning.
  • bbtutils – A utility module for gathering information in a consistent way
  • skypedump – A utility for dumping skype information (contacts, chat history, etc)
  • chromedump – A utility for dumping browser information from Google Chrome (history, downloads, favorites, cookies, autofill data, etc)
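As a taste of what chromedump involves: Chrome’s History file is a plain SQLite database with a urls table. A hedged sketch of the kind of query used (the function name is mine, and you should run it against a copy of the file, since Chrome keeps the live one locked):

```python
import sqlite3

def recent_urls(history_db, limit=10):
    """Return (url, title, visit_count) rows from a copy of Chrome's
    History database, most recently visited first."""
    con = sqlite3.connect(history_db)
    try:
        return con.execute(
            "SELECT url, title, visit_count FROM urls "
            "ORDER BY last_visit_time DESC LIMIT ?",
            (limit,)).fetchall()
    finally:
        con.close()
```

On Linux the live database typically lives at ~/.config/google-chrome/Default/History.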