Retrieving AWS Metadata from Environment Variables

When working with AWS EC2, it’s often handy to be able to reference certain information about an instance. The obvious solution is the AWS Metadata service, accessible with a simple cURL command. For example, to get the private IP address of an instance:

curl -s http://169.254.169.254/latest/meta-data/local-ipv4
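To grab several fields at once, a short loop over the metadata endpoints can emit export lines directly. This is just a sketch: the field list (and the AWS_* variable names it produces) are my own arbitrary choices, not necessarily what the aws-env image outputs.

```shell
# Standard EC2 metadata endpoint; -m 2 keeps curl from hanging off-EC2.
META=http://169.254.169.254/latest/meta-data
for field in instance-id local-ipv4 placement/availability-zone; do
    # e.g. "placement/availability-zone" -> "PLACEMENT_AVAILABILITY_ZONE"
    name=$(echo "$field" | tr 'a-z/-' 'A-Z__')
    value=$(curl -s -m 2 "$META/$field" 2>/dev/null || true)
    echo "export AWS_${name}=${value}"
done
```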

Since I hate writing boilerplate code, and don’t need to make an HTTP request each time I want to know something about my environment, I put together a Docker image that fetches most of the available metadata and outputs environment variable settings. Also, I’ve been itching to write something in Go.

My primary use case is to create /etc/aws.env when a new CoreOS instance starts, but this should work on any system with systemd. To do this, aws-env.service should be installed and configured to run at startup. One way to do this is via cloud-config (EC2 User Data):

    - name: aws-env.service
      command: start
      enable: yes
      content: |
         # Contents from 'aws-env.service' go here
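For reference, a minimal unit file might look something like the following. This is a sketch of my own rather than the published aws-env.service; the exact ExecStart and dependencies will depend on your setup.

```ini
[Unit]
Description=Write EC2 metadata to /etc/aws.env
After=docker.service
Requires=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c 'docker run --rm cmattoon/aws-env > /etc/aws.env'

[Install]
WantedBy=multi-user.target
```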

Alternatively, you can use it with eval:

eval $(docker run --rm -t cmattoon/aws-env)




Auto Scaling Group LifecycleHooks, SQS, and IAM

AWS offers Lifecycle Hooks for AutoScaling Groups that allow you to respond to a change in instance state. For example, publishing create/terminate events to an SQS queue:

aws autoscaling put-lifecycle-hook \
 --lifecycle-hook-name "${HOOK}_launching" \
 --auto-scaling-group-name "$ASG_NAME" \
 --notification-target-arn "$SQS_ARN" \
 --role-arn "$ROLE_ARN" \
 --lifecycle-transition "autoscaling:EC2_INSTANCE_LAUNCHING"

aws autoscaling put-lifecycle-hook \
 --lifecycle-hook-name "${HOOK}_terminating" \
 --auto-scaling-group-name "$ASG_NAME" \
 --notification-target-arn "$SQS_ARN" \
 --role-arn "$ROLE_ARN" \
 --lifecycle-transition "autoscaling:EC2_INSTANCE_TERMINATING"

The instructions for doing so are pretty straightforward, but I ran into an irritating error:

An error occurred (ValidationError) when calling the PutLifecycleHook operation: Unable to publish test message to notification target arn:aws:sqs:us-west-2:123456:my-sqs-queue.fifo using IAM role arn:aws:iam:1234:role/my-asg-role. Please check your target and role configuration and try to put lifecycle hook again.

All of the search results for that error turned up solutions involving incorrect IAM policies. This should not be the case if you simply add the AutoScalingNotificationAccessRole per the instructions. For reference, the correct settings are below.

In my case, however, it turns out that AutoScaling can’t publish to a FIFO queue. Recreating the queue as a standard queue fixed this problem for me.


    {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Resource": "*",
            "Action": [
                "sqs:SendMessage",
                "sqs:GetQueueUrl",
                "sns:Publish"
            ]
        }]
    }

You’ll also want to verify that a Trust Relationship exists on your Role that allows the autoscale service to assume said role:

  {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
          "Service": "autoscaling.amazonaws.com"
        },
        "Action": "sts:AssumeRole"
    }]
  }
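With the hooks in place, the consuming side reads the queue and then tells AutoScaling to proceed. The sketch below reuses $HOOK and $ASG_NAME from above; $SQS_URL is an assumption on my part (receive-message takes the queue URL, not the ARN), and error handling is omitted.

```shell
# Long-poll the queue for one lifecycle message.
MSG=$(aws sqs receive-message --queue-url "$SQS_URL" \
      --wait-time-seconds 20 --query 'Messages[0].Body' --output text)

# The body is JSON; extract the token AutoScaling expects back.
TOKEN=$(echo "$MSG" | python3 -c \
    'import json,sys; print(json.load(sys.stdin)["LifecycleActionToken"])')

# ...do your launch-time work here, then let the instance proceed.
aws autoscaling complete-lifecycle-action \
    --lifecycle-hook-name "${HOOK}_launching" \
    --auto-scaling-group-name "$ASG_NAME" \
    --lifecycle-action-token "$TOKEN" \
    --lifecycle-action-result CONTINUE
```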



Install R Packages on Shiny Server Pro

If you’ve installed Shiny Server Pro as a user other than shiny, you might have experienced difficulty adding R packages. This is because Shiny Server Pro runs R as the shiny user, and running R -e "install.packages('foo')" as another user will install packages to that user’s local library only.

The solution to this is to su to the shiny user:

su - shiny

And run

R -e "install.packages('foo', repos='')"

Alternatively, this script will parse an R file looking for require statements and install the necessary packages. It isn’t very smart, so be careful.
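The gist of such a script is below. This is my own rough reconstruction rather than the linked script itself: it greps library()/require() calls out of app.R (a filename I'm assuming) and installs whatever it finds from a CRAN mirror of your choosing, so review the package list before trusting it.

```shell
FILE=app.R  # the R source file to scan

# Extract bare package names from library(...) and require(...) calls.
pkgs=$(grep -oE "(library|require)\(['\"]?[A-Za-z0-9.]+['\"]?\)" "$FILE" 2>/dev/null \
       | sed -E "s/(library|require)\(['\"]?([A-Za-z0-9.]+)['\"]?\)/\2/" | sort -u)

for pkg in $pkgs; do
    # Install as the shiny user so Shiny Server Pro can see the package.
    su - shiny -c "R -e \"install.packages('$pkg', repos='https://cran.rstudio.com/')\""
done
```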


20 Essential Linux Commands

This post provides examples of everyday Linux commands. I wrote this as a quick orientation to the CLI for newcomers. Keep in mind that there are usually several ways to accomplish a task, and other command combinations or programs might be better suited to your needs. Don’t be afraid to google it.


  • apropos, man
  • ls, ll
  • pwd, cd
  • mv, cp
  • rm, shred
  • head, tail
  • more, less
  • wget, curl
  • cat, strings
  • grep, find

Get Help With Commands By Using MAN

The examples on this page represent a small fraction of the possible uses for each command. Many commands have tons of flags and arguments that allow them to adapt to many scenarios. Since it’s impossible to remember how each program works, its arguments, etc., most come with a manual page, or man page. The man command retrieves the manual for the program and displays it on the screen. While sometimes verbose, man pages are typically your best source of initial information. This is the Manual in “RTFM”.

The man system has several different sections, each providing documentation on a specific aspect of the program:

  1. General Commands
  2. System Calls
  3. Library functions (Particularly the C standard library)
  4. Special files and drivers (Typically in /dev)
  5. File Formats & Conventions
  6. Games and Screensavers
  7. Miscellaneous
  8. System Administration Commands and Daemons

Typing man <command> will generally display section 1, if it exists. Calling man on something else, such as the C function pthread_join, will display section 3 by default. To view a specific section, type man <section> <command>. Note that not all programs have manual pages, and of those that do, most don’t have a page in every section. On some systems, you can type man <section> and press TAB to view a list of pages available for that section.

To view the manual page for man itself, type man man.

Finding the right Linux command with apropos

The Unix Tools Philosophy favors tools that each serve a specific purpose and can be chained together, which results in a seemingly endless variety of ways to accomplish the same job. The apropos command will search the manual pages for a term and return a list of possible commands. This “search” feature isn’t full-featured and works mostly on keyword matching.

    apropos ftp
apt-ftparchive (1) - Utility to generate index files
ftp (1) - Internet file transfer program
netkit-ftp (1) - Internet file transfer program
netrc (5) - user configuration for ftp
pam_ftp (8) - PAM module for anonymous access module
pftp (1) - Internet file transfer program
sftp (1) - secure file transfer program
smbclient (1) - ftp-like client to access SMB/CIFS resources on servers

Sometimes the number of commands can be unwieldy (try “apropos user” or “apropos ip”). Since the apropos command doesn’t do well with phrases like “add user” or “ftp upload”, it’s often useful to filter the output through grep. You can also pipe the output to more or less.

    apropos user | grep add
adduser.conf (5) - configuration file for adduser(8) and addgroup(8) .
addgroup (8) - add a user or group to the system
adduser (8) - add a user or group to the system
pam_issue (8) - PAM module to add issue file to user prompt
useradd (8) - create a new user or update default new user information

List directories with ls

ls is the command for listing a directory. Useful flags include:

  • -a (--all), which shows dotfiles
  • -l, which provides a long listing that includes file size and type
  • -h (--human-readable), which shows filesizes in normal units instead of bytes (1.23 GB instead of 1234921293)
  • -S, which sorts by file size

Useful combinations:

Default ll command, plus human-readable sizes

    alias ll="ls -alh"

List all tar files in current directory:

    ls *.tar

List tar files in long format:

    ls -l *.tar

List the largest 10 files in a directory and output size in human readable form:

    ls -lhS | head -n 10


Navigate the filesystem with CD and MC

Navigating around the filesystem is done with the change directory (cd) command. Instead of merely listing the contents of the /tmp directory (ls -l /tmp), you can move into the /tmp directory and list the contents of the current directory:

cd /tmp && ls -l

To return to your home directory, simply type cd with no arguments. You can also use the shortcut ~ to refer to files under your home directory. The two paths below describe the same location on the filesystem, assuming that the second command is run by the cmattoon user.

    cd ~/Documents
    cd /home/cmattoon/Documents

If you aren’t sure which directory you’re in, you can use the pwd (print working directory) command to tell you.

Midnight Commander (mc) is a third-party application that some people find useful for navigating the filesystem, copying and moving files, etc. You can find more information on their site.

Move, Rename and Copy files with mv and cp

Copy a config file to a backup:

    cp config.ini config.ini.bak


The same thing, using brace expansion:

    cp config.ini{,.bak}

Copy the entire config directory and its contents:

    cp -r config/ config-backup

Remove files with rm and shred

The rm command removes files basically forever, so be careful.

There is no “undo” command for rm. Some people choose to edit their ~/.bashrc or ~/.bash_aliases and add the following:

alias rm="rm -i"  # Ask for confirmation before deleting files.

Linux makes the process of deleting a file forever deceptively simple:

    rm DELETEME.txt

To indiscriminately remove everything in the “/tmp” directory:

    rm -rf /tmp/*

Note: Either “rm -rf /tmp/” or “rm -rf /tmp” would delete the “/tmp” directory itself.

To remove a file with a space in its name, escape the space (or quote the filename):

rm My\ Document

For private information, you might consider using shred.

cp ~/Downloads/ImportantDocument.pdf /mnt/backup/ && shred ~/Downloads/ImportantDocument.pdf

shred’s -s (--size) flag limits shredding to the given number of bytes (e.g., “1M”, “100K”, “1G”, etc.), and -u (--remove) deletes the file after it’s done shredding.
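For example, to overwrite a scratch file a few times and then delete it (the filename here is just an illustration):

```shell
echo "secret data" > secret.txt   # something worth destroying
shred -n 5 -u secret.txt          # overwrite 5 times, then unlink the file
```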

Get first and last n lines from file with head and tail

The head and tail commands retrieve the first and last n lines from a file or stdin. Pipe to either of these to pass output to other commands:

Show the first 10 lines (the default) of a file:

    head /var/log/syslog

Continuously output (–follow) the last screenful of information from /var/log/apache/error.log:

    tail -f /var/log/apache/error.log

Stores the last line of the log to $LAST_ENTRY

    LAST_ENTRY=$(cat /var/log/app/actions.log | tail -n 1)
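The two also combine nicely to pull a range out of the middle of a file. For example, to print lines 20 through 25 (seq stands in for a real file here):

```shell
# head keeps the first 25 lines; tail keeps the last 6 of those (20-25).
seq 1 100 | head -n 25 | tail -n 6
```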

Read large files one screen at a time with more and less

More and less are two programs that filter text output. They’re commonly used to page through large files, but can also be used to buffer output from programs. It’s a helpful habit to read config files with more or less, rather than opening them with a text editor. In both programs, you can find help by pressing h and exit by pressing the q key.

Read a config file one screen at a time:

    cat /etc/ssh/sshd_config | less

Page through the output of an install script, errors included (substitute your own script for install.sh):

    ./install.sh 2>&1 | less

Although both programs are very similar, less is the newer one with more features. Specifically, less allows forward and backward navigation (via arrow keys and PgUp/PgDn) and doesn’t have to read the entire file into memory. This makes it more efficient on large files than its predecessor, more.

Use the slash key (/) to begin a search. While a search is in progress, the “n” key will move to the next result.

Download files with curl and wget

Since both curl and wget support HTTP/HTTPS and FTP, they are especially useful for interacting with web-based services like APIs and HTML forms. Both programs use the HTTP GET method by default but are capable of others as well (POST, HEAD, PUT, etc.), and both support SSL/TLS. cURL supports even more protocols, including Telnet, SCP, SFTP, POP3, IMAP, SMTP, and LDAP, and a number of other features.

Generally speaking, I prefer wget for downloading files and cURL for interacting with APIs.

Note: Ubuntu comes with wget, but you’ll need to install curl. CentOS and OS X are the opposite. Either way, you’ll probably need to install one or the other.

To download a file with wget:

    wget <URL>

If the URL contains special characters, or is pointing to a script, it’s sometimes better to wrap the URL in quotes and use the -O flag to specify an output file.

    wget "http://example.com/get_image.php?id=1234&size=130" -O image.png

Without the -O flag, wget would save the file as “get_image.php?id=1234&size=130” – which is unlikely to work as an image in any capacity.

While wget saves the file to the current directory (or the path specified by -O), curl’s default action is to write the output to stdout. To echo your current public IP address, you can run:

    curl -s ifconfig.me

To download a file with curl, you’ll need to redirect stdout to a file:   

    curl "http://example.com/get_image.php?id=1234&size=130" > image.png

Curl and wget both have “quiet” or “silent” modes that suppress output. This mode is particularly useful for scripts and cron jobs where you don’t want extra output cluttering the screen.

curl -s "<URL>" > installer.tgz
wget -q "<URL>" -O installer.tgz

If you still want to see error output, but no progress bar, you can use -sS in curl. The lowercase -s is for silent mode, the uppercase -S for “show errors”.

For more details, type “man wget” or “man curl”.

Check disk usage with du and df

To check the amount of disk space available, use the df command (think “disk free space”). The du command will show you the amount of disk space used in the specified directory. Like the ls command, both df and du can output the human-readable filesize by using the -h flag.

Output of df -h will show the disk space for all mounted drives by default:


To see how much space the current directory is taking up, use du -sh. The -s flag means “summary”, and prints the total usage of all subdirectories. Without the -s flag, du will generate a report for each subdirectory. This feature can be useful for finding the largest directories. The following command finds the 10 largest subdirectories of the current directory. By piping the output of du into sort (-h sort by human-readable filesize, -r reverse), we can sort the entries from largest to smallest. That output is then piped into head to retrieve the top 10 only.

    du -h . | sort -rh | head
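If you only care about immediate subdirectories rather than the full recursive listing, GNU du's --max-depth flag trims the report:

```shell
# One line per immediate subdirectory (plus "." itself), largest first.
du -h --max-depth=1 . | sort -rh | head
```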

Of course, you could pipe this output to more or less and peruse the entire list of directories, but there’s already a better tool for this: ncdu. (The “nc” alludes to the ncurses library used to render the user interface.) As you can see in the screenshot below, ncdu provides an easy way of tracking down large files.



Get file contents with cat and strings

The cat and strings commands are used to write file contents to stdout. The cat command will dump the raw file contents (in whatever form), while strings will print only printable characters. This feature makes the strings command a useful choice in identifying a file format or other initial discovery tasks.

strings vs. cat – Output of “strings /bin/true” is on the left; output of “cat /bin/true” is on the right.

Print raw binary data of /bin/true to stdout:

    cat /bin/true

See all human-readable strings in the “true” binary:

    strings /bin/true

Zero a file with cat:

    cat /dev/null > <file>

Find what you’re looking for with grep and find

Grep (Globally search a Regular Expression and Print) is useful for finding strings in files (or stdout). The find utility is used for searching by file name, size, etc.

To find all PHP files with the string “@todo” (case insensitive) in the src/ directory:

    grep -i "@todo" src/*.php

Recursively search the src/ directory for files containing the string “@todo” (case-insensitive):

    grep -ri "@todo" src/

This uses the -r (–recursive) and -i (–ignore-case) flags. As you may suspect, the –recursive flag searches the directory recursively, while the –ignore-case flag ignores the difference between uppercase and lowercase characters.

Grep is also useful to filter output from commands or stdout:

cat /var/log/apache2/error.log | grep -i "fatal error"

Watch the error log for lines containing the IP address “192.0.2.1”:

tail -f /var/log/apache2/error.log | grep "192.0.2.1"

If you have multi-line output in the log, grep will cut off all but the matching line. If you want to see lines on either side of the target line, use the -A (--after-context) or -B (--before-context) flags. For example, consider grepme.txt, a file with “This is Line #n” for n from 0-30. Both commands produce the same output:

    grep 20 grepme.txt -A 5 -B 3
    cat grepme.txt | grep "20" -A 5 -B 3
    This is Line #17
    This is Line #18
    This is Line #19
    This is Line #20
    This is Line #21
    This is Line #22
    This is Line #23
    This is Line #24
    This is Line #25

Other useful flags include -v, which inverts the match, and -l/-L, which print the names of files containing a match (-l) or lacking one (-L) instead of the matching lines themselves.

Show all lines in access log that don’t include “GoogleBot”:

tail -f /var/log/apache2/access.log | grep -v GoogleBot

Show the names of files in the current directory (and subdirectories) that don’t have “@license” in them:

    grep -riL "@license" .

Show the names of files that have “@todo” in them:

    grep -ril "@todo" .

Show all lines with “@todo” in the current directory (recursive). Exclude the “img” and “templates” directories from the search.

grep -ri "@todo" . --exclude-dir="templates" --exclude-dir="img"

The find command is useful for finding files based on filename, size, type, or other attributes. In its simplest form, the find command searches for a filename:

find ~/Downloads -name '*.tgz'

The above command searches the ~/Downloads directory for files matching the pattern ‘*.tgz’. Since no -type is specified, it’ll search for files or directories.

Let’s look for files (only) in ~/Downloads that are over 100 MB in size:

find ~/Downloads/ -type f -size +100M

To find files smaller than 100 MB:

find ~/Downloads/ -type f -size -100M

To search /var/log for files older than 30 days and delete them:

find /var/log -type f -mtime +30 -exec rm -f {} \;

You can also use the built-in -delete flag:

find /var/log -type f -mtime +30 -delete
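A third variant pipes to xargs. The -print0/-0 pair delimits names with NUL bytes, so paths containing spaces survive the trip, and GNU xargs' -r skips the rm entirely when nothing matched:

```shell
find /var/log -type f -mtime +30 -print0 | xargs -0 -r rm -f
```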



Google SWE Interview Preparation

I interviewed at Google Pittsburgh a while back (as a result of Google FooBar), and while I signed an NDA regarding the interview questions, I can provide a brief overview of the process. Ultimately, I did not receive an offer, so take this for what it’s worth.


Google will email you some official interview preparation materials, which you should obviously review. They outline the process very thoroughly and describe the material you may be tested on. If you’ve prepared for technical interviews before, much of this content won’t be a surprise, but it would be foolish not to review everything they’ve sent.

How does their interview process work?

Typically, there are phone interviews, then an on-site interview. I skipped the phone interview stage because of FooBar, and went directly to the on-site interviews.
If you are selected after submitting an application, or re-apply, you’ll be asked to do a phone interview first.
Since I can’t offer guidance here, I’ll refer you to Google’s Interviews page for specific details.

How much time should I allot to studying?

This answer depends on how comfortable you are with your CS fundamentals. Most people dedicate at least a month, possibly more. A recruiter told me they’re not able to schedule interviews greater than 30 days ahead, but you have the option of contacting them later to schedule. From every interaction I’ve had (a couple recruiters, and the engineers on-site), they genuinely want you to be at the top of your game when you come in. Take your time. They’re almost too cool about making sure you’re prepared for the interview process.

On-Site Interview

The on-site interview can be done over one day or two. I’m not sure what game theory says here, but I went for the one-day interview. This consisted of five interviews, about 45 minutes each. (You’ll also meet up with another engineer for lunch, which isn’t really part of the interview process.) They’ve even put up an example interview on YouTube.

Do not expect them to ask about your past projects, resume, etc. I saw a lot of complaining on glassdoor about this (mostly from people who didn’t get an offer).

They’re less interested in your specific background and accomplishments than in your ability to solve the problems presented, which seems to offend a lot of people. Furthermore, everyone I met was super friendly, except for one interviewer who really didn’t seem interested in stepping away from work to interview someone. I’m told this happens most frequently in phone interviews, though.

Generally speaking, the problems I was presented had a brute-force solution and an elegant solution or two. If you reach a working solution, they’ll likely ask a few cursory questions about Big-O notation or what data structure you’re using, then ask you to iterate on your code to meet additional requirements, consume fewer resources, or otherwise refine your solution. While they might appear to be tricky questions, they’re really not out to get you. The problems are very much in line with the TopCoder Division I problems, and I’m told that being comfortable with solving those types of problems correlates with success at Google.

I was able to solve two of the problems relatively easily, had difficulty with the third, and did not reach a working solution for two other problems. You are not necessarily penalized for not reaching a solution, but it obviously helps. I’m told they’re more interested in your thought process and approach than getting a working solution.

Review Comp-sci fundamentals

You should be comfortable discussing the various types of sorting algorithms, BFS/DFS, tree and graph manipulation, etc. You will be expected to talk intelligently about Big-O notation and discuss the running time and space constraints of the algorithms you design. You should be able to digest the problem and find the most appropriate data structure (array, stack, linked list, graph, etc). I did not have any problems that involved crazy complex algorithms or cutting-edge research.

Data Structures

  • Stacks & Queues
  • Binary Trees
  • Trie-Trees
  • Graphs


Algorithms

  • Sorting
  • Tree insertion, manipulation, and search
  • Stack/Queue problems

Practice, practice, practice

Commit to doing at least one practice problem each day. You will be expected to do one interview in a compiled language (C++, Java, or Go), but are permitted to do the rest in a common language of your choosing (e.g., Python). I’d venture to guess that nearly all Google engineers are polyglots, and as long as you’re not using Lisp or Prolog or something, you should be fine. Talk with your recruiter, or attend the prep session for answers to specific questions like these.

What Libraries Are Permitted?

Neither I nor my interviewers were aware of a specific list of permitted libraries, but I was allowed to use common Python and C++ libraries (bisect, std::vector, etc.), as long as they didn’t solve the problem outright (e.g., Python’s sorted() function). You are not expected to implement everything from scratch either – they want to see modern, idiomatic programming.

Example Google Interview Questions

The internet has some specific interview questions that others have asked, but obviously Google’s engineers aren’t dumb, and Google itself is uniquely aware of what content people are searching for. They routinely change up the questions, and I’m told their validated question pool is sufficiently large that you can’t study the test. That being said, the questions I’ve seen online accurately reflect the difficulty level of the problems I had, but my problems were 100% unrelated. Be able to apply the basics.


As someone without a CS degree, the questions that I had weren’t entirely outside my grasp. I could almost see the right solution, but wasn’t quite able to implement some of them. More preparation would have definitely helped.

Many of the questions were related to problems I’d solved while practicing. The best I can do in describing them is this: they’re standard comp-sci problems, with a twist. They’re close enough to standard problems that they’ll expect you to use the appropriate algorithms and/or data structures, but modified slightly so that you’ll have to actually understand what’s going on. Rote memorization of quicksort, mergesort, etc. won’t do.


NCCR – Ventilation

This is the first of a series of posts on the National Continued Competency Requirements (NCCR), each covering a core competency for prehospital providers.

Minute Ventilation

Ventilation with a BVM is arguably one of the most important skills of any prehospital provider. In cases where the patient is not apneic, it is critical that the provider is able to identify when ventilation is appropriate. In EMT class, students are often given the guideline that a respiratory rate less than 8 or greater than 24 requires assisted ventilations with a BVM. While this is perhaps a reasonable guideline, it is only a half-truth. Patients in even mild respiratory distress can have respiratory rates exceeding 24 breaths/min, but obviously do not require assisted ventilations, while a patient with a respiratory rate of 10 breaths/min may require assistance with a BVM. The decision to ventilate a patient should be based on the adequacy of breathing, not the respiratory rate alone.

The key to determining if breathing is adequate or not is the patient’s minute ventilation (MV), or minute volume of respiration, which is their tidal volume (Vt) multiplied by their respiratory rate (or frequency), f.

MV = Vt * f

The average tidal volume, or volume per breath, in a healthy adult is around 500 mL, which is the size of a 16.9 oz bottle of water. (This may also be estimated as 4-8 mL per kilogram of body mass.) Using the formula above, we can determine the average minute volume to be somewhere between 6,000 mL (12 breaths/min) and 9,000 mL (18 breaths/min). In other words, a healthy, average-sized adult moves 6-9 liters of air per minute.

If you’ve ever wondered what constitutes “high flow” vs. “low flow” oxygen, this is it. High flow oxygen refers to administering oxygen at a flow rate greater than the patient’s own rate, while low flow is the opposite. For a 70 kg adult with a respiratory rate of 12 breaths/min, 8 liters per minute might just count as “high flow” oxygen. So why do EMT instructors make their students repeat “high flow oxygen at 15 L/min via NRB” ad nauseam?

Consider the average 70 kg adult with a textbook tidal volume of 500 mL and a respiratory rate of 18 breaths/min. Their minute volume is (500 * 18 = 9,000 mL/min = 9 L/min). If their respiratory rate increases to 24 breaths/min, their minute volume is now 12,000 mL/min, or 12 L/min. At 30 breaths/min, which is not uncommon, this becomes 15,000 mL/min, or 15 L/min. Administering oxygen at a rate of 15 L/min ensures that such a patient is getting 100% oxygen with each breath. On the other hand, if their breathing becomes shallower as the rate increases, the minute volume can plateau (250 mL/breath x 30 breaths/min = 7.5 L/min).
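The arithmetic above is easy to sanity-check. A throwaway shell function (using awk, since the shell itself only does integer math) reproduces the numbers from this paragraph:

```shell
# minute volume (L/min) = tidal volume (mL) x rate / 1000
minute_volume() {
    awk -v vt="$1" -v f="$2" 'BEGIN { printf "%.1f\n", vt * f / 1000 }'
}
minute_volume 500 18   # 9.0
minute_volume 500 30   # 15.0
minute_volume 250 30   # 7.5 (shallow, fast breathing plateaus the minute volume)
```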

Therefore, a determination of whether or not a patient’s breathing is adequate must be made based on an assessment of both tidal volume and respiratory rate. Assisting ventilations is required any time the patient’s own ventilations are insufficient, which can be due to an inadequate tidal volume, an inadequate rate, or both. If the patient is determined to be breathing adequately, positive pressure ventilation is not required, but you should continue to determine if other interventions are appropriate.

Alveolar Ventilation

A naive interpretation of the relationship between tidal volume and respiratory rate might suggest that there are an infinite number of combinations that result in the same minute volume. While mathematically correct, this interpretation ignores the anatomic dead space present in the airways. Anatomic dead space refers to the parts of the airway that cannot exchange gases. Since gas exchange occurs in the alveoli, the anatomic dead space is the volume of the conducting airways (nose and mouth down to the terminal bronchioles). This dead space is around 150 mL on average (West, 1962) but can be larger due to devices like advanced airway adjuncts and SCUBA gear, which physically extend the airway.

Everything except the alveoli is considered dead space.

When dead space is accounted for in the minute ventilation, we can determine the amount of air that moves through the alveoli each minute. This is called the alveolar ventilation (Va) and is calculated by subtracting the dead space from the tidal volume, then multiplying by respiratory rate (Levitzky, 2013).

Va = (Vt – Vd) * f

This metric is more relevant to our assessment than minute volume, as it reflects the actual amount of air available for gas exchange. As tidal volume decreases, an increase in rate alone will not be sufficient; these patients require supplemental volume. Consider our prototype adult (70 kg, Vt = 500mL, f=16) whose tidal volume starts to fall. Initially, her alveolar ventilation is 5.6 L/min. When the tidal volume is cut in half (250mL), the alveolar ventilation falls to below a third of what it was originally.

Normal: (500 mL/breath – 150 mL/breath) * 16 breaths/min = 5.6 L/min

Hypoventilation: (250 mL/breath – 150 mL/breath) * 16 breaths/min = 1.6 L/min
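The same quick-check approach works for alveolar ventilation; plugging the formula into a small shell function reproduces the two cases above:

```shell
# Va (L/min) = (tidal volume - dead space) x rate / 1000
alveolar_ventilation() {
    awk -v vt="$1" -v vd="$2" -v f="$3" 'BEGIN { printf "%.1f\n", (vt - vd) * f / 1000 }'
}
alveolar_ventilation 500 150 16   # 5.6 (normal)
alveolar_ventilation 250 150 16   # 1.6 (hypoventilation)
```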

At a tidal volume of 250 mL, increasing her respirations to 32 breaths/min would maintain her original minute volume of 8 L/min, but her alveolar ventilation would only rise to (250 - 150) * 32 = 3.2 L/min. To return to her initial Va of 5.6 L/min at that tidal volume, this patient would have to breathe at 56 breaths/min. This is obviously problematic. Instead, increasing the depth of ventilation, either with a BVM or through increased work of breathing, is a better way to increase overall ventilation. If your patient has an excessively high respiratory rate (> 30), intervention is necessary to prevent deterioration in their condition.

As you can see, management of a patient’s ventilations requires careful attention to both rate and depth of ventilation. Assisting ventilations should generally be done at a rate of 10-12 breaths/min (one breath every 5-6 seconds), while delivering enough air to see the chest rise and fall (Weiss, 2008).

Ventilation-Perfusion Ratio

Effective gas exchange requires ventilation and perfusion.

When discussing alveolar gas exchange, alveolar ventilation (Va) is only one half of the picture. The other half comes from alveolar perfusion, which provides the red blood cells that transport oxygen and carbon dioxide throughout the rest of the body. When alveoli are not perfused, perhaps due to a blockage in the pulmonary vasculature (i.e., pulmonary embolism), this creates alveolar dead space. You may also hear the term physiologic dead space, which refers to the sum of anatomic and alveolar dead spaces. Since this post focuses on ventilation, I’ll assume alveolar perfusion is within normal limits here, and cover V/Q ratios in a separate post.

Effects of Ventilation on Cardiac Output

Ventilating too fast or with too much volume can result in a decreased cardiac output, which is obviously undesirable. As a refresher, cardiac output is comparable to minute ventilation, as it is a function of heart rate and stroke volume. Stroke volume, of course, is the amount of blood ejected from the left ventricle with each contraction. As you inhale, your diaphragm contracts and accessory muscles lift the chest wall up and out, creating a larger cavity. In turn, this creates negative pressure that draws air into the lungs and allows venous blood to return to the right side of the heart.

Positive pressure ventilation (PPV), as the name implies, relies on positive pressure to force air into the chest cavity. In this instance, there is little or no cooperation from the diaphragm and accessory muscles, resulting in air being forced into a fixed-size cavity. (While the chest wall certainly expands to accommodate this increased volume of air, it is doing so reluctantly.) This positive pressure also obstructs the venous return to the heart, and decreases cardiac output.

Respiratory Failure

Respiratory conditions, like many other conditions, can be described in a spectrum ranging from mild distress to respiratory failure. Likewise, episodes can be acute, chronic, or chronic with acute exacerbation. While many of our respiratory patients require only comfort care, it is important to closely monitor your patient for signs of impending respiratory failure. Respiratory failure is characterized by inadequate oxygenation or inadequate alveolar ventilation. The hallmark sign of respiratory failure is deterioration in mental status. This is often accompanied by, or preceded by, a decrease in SpO2, cyanosis, accessory muscle use, grunting, and nasal flaring. Respiratory failure can be further classified by whether or not hypercapnia (elevated levels of carbon dioxide) is present. These patients are severely ill and will likely die without intervention. Once a patient is in respiratory failure, assisting ventilations with a BVM is the best treatment option. Again, this should be done at a rate of 10-12 per minute.

Bag-Mask Ventilation

Using a BVM is simple enough in theory, but numerous studies have shown that we’re just not that good at it. In fact, the AHA recommends against using a BVM in cardiac arrest when there is only one rescuer (mouth-to-mask is better). BVM ventilation is most effective when two trained rescuers are available: one to maintain the airway and mask seal and a second provider to ventilate the patient.

Since cardiac output is reduced during CPR (around 25-33% of normal), gas exchange is also reduced. Therefore, the AHA recommends tidal volumes of 500-600 mL (6-7 mL/kg), which is enough to produce visible chest rise. Be sure to avoid overzealous ventilation, as it can lead to gastric inflation as well as a reduction in cardiac output (Link MS, 2015).

Continuous Positive Airway Pressure (CPAP)

Patients in impending respiratory failure – exhibiting signs of inadequate oxygenation, but not a change in mental status – can sometimes be managed successfully with Continuous Positive Airway Pressure (CPAP). As the name implies, CPAP works to keep the airways open with positive pressure. (It also increases the A-a gradient, which I’ll cover in the post about ventilation-perfusion ratios.) Airway collapse is primarily a problem during exhalation, when the small, flexible bronchioles are squeezed shut by the positive intrathoracic pressure surrounding them. By splinting these narrow airways open and increasing the A-a gradient, CPAP improves oxygenation and allows carbon dioxide to exit the body more easily.

The primary advantage of CPAP is mitigating respiratory failure and avoiding unnecessary intubation. Since CPAP does not ventilate the patient, the patient must be alert, able to obey commands, and have a respiratory rate greater than 8 breaths/min. As discussed above, positive airway pressure can impede cardiac function, so CPAP is contraindicated in hypotensive (SBP < 90 mmHg) patients or those with a suspected or known pneumothorax.


Recognition of respiratory failure is critical for any level of prehospital provider. As the patient begins to show signs of inadequate oxygenation – cyanosis, decreased SpO2, accessory muscle use, and so on – the provider should consider interventions to increase alveolar ventilation. CPAP and assisting ventilations with a BVM are basic interventions available to most prehospital providers. CPAP is indicated when the patient is inadequately oxygenated (SpO2 < 90% despite high-flow oxygen) but not yet in respiratory failure. Once the patient’s mental status or respiratory effort begins to deteriorate, immediate ventilation with a BVM is indicated, at a rate of 10-12 breaths/min in most cases and, whenever possible, with a basic airway adjunct (OPA or NPA) in place.





Levitzky, M. (2013, Jul 15). Alveolar Ventilation. Retrieved Oct 20, 2016, from LSUHSC School of Medicine – Dept. of Physiology:

Office of Academic Computing. (1995). Dead Space. Retrieved Oct 18, 2016, from Johns Hopkins University School of Medicine:

Weiss, A. L. (2008). Focus On – Bag-Valve Mask Ventilation. ACEP News.

West, J. (1962, Nov). Regional differences in gas exchange in the lung of erect man. Journal of Applied Physiology, 17(6), pp. 893-898.

Link MS, Berkow LC, Kudenchuk PJ, Halperin HR, Hess EP, Moitra VK, Neumar RW, O’Neil BJ, Paxton JH, Silvers SM, White RD, Yannopoulos D, Donnino MW. Part 7: adult advanced cardiovascular life support: 2015 American Heart Association guidelines update for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation. 2015;132:S444–S464.


NCCR – Ventilation

Unblocking your IP address in the WordPress All-In-One Security plugin.

Recently, I locked myself out of a WordPress site that uses the All-In-One Security plugin to block logins from banned IP addresses. After I ran some tests, the plugin did its job and blocked my IP range, preventing me from logging in to the admin panel. The problem presents as being unable to log in with credentials you know to be valid, with no error message on the login page. (The same symptoms can also be caused by session issues, or by login error messages being disabled altogether.)

To verify that the security plugin is the problem, check the security log. Relative to the plugin’s own directory, the file is at all-in-one-wp-security/logs/wp-security-log.txt. In the more likely event that you installed the plugin via WordPress, look for wp-content/plugins/all-in-one-wp-security/logs/wp-security-log.txt.
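A quick way to check the log is to grep it for your IP. The sketch below runs against a sample excerpt in /tmp (the log lines shown are illustrative, not the plugin’s exact format); point the path at the real wp-security-log.txt on your server:

```shell
# Create a sample log excerpt to demonstrate against (illustrative format).
LOG=/tmp/wp-security-log.txt
printf '%s\n' \
  '2016-10-18 12:00:01 - 123.45.67.89 - Login lockout' \
  '2016-10-18 12:05:42 - 98.76.54.32 - Login OK' > "$LOG"

# Count log entries mentioning your IP (or its /24 range).
grep -c '123\.45\.67\.' "$LOG"
```

If the count is nonzero, the plugin has been locking out your address and the SQL cleanup below applies.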

Unblocking Your IP Range

You’ll need command-line access, or some other way to run SQL commands. Any of the following should work:

  • The MySQL command-line client
  • PHPMyAdmin
  • The ability to upload and run PHP scripts


Log in as an admin, or with your WordPress database credentials. Select the appropriate database, and find your IP address with a site like

First, check that the table exists. If you use table prefixes, be sure to modify the table name in these examples to match your table name.

SHOW TABLES LIKE '%login_lockdown%';

If you see a table name, you’re in good shape. If not, you’re probably in the wrong database or not using the same plugin. Next, verify that your IP address range is blocked. Use the % character as a wildcard in your search, since the plugin can block an entire range.

SELECT * FROM aiowps_login_lockdown WHERE failed_login_ip LIKE '123.45.67.%';

If results are returned, look at the rows and see which usernames/times are safe to unblock. If all the rows are safe to unblock, the following command will delete everything you just selected.

DELETE FROM aiowps_login_lockdown WHERE failed_login_ip LIKE '123.45.67.%';

To delete specific entries, use the ID column to specify which rows to delete:

DELETE FROM aiowps_login_lockdown WHERE id IN (324, 325, 326);
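If you’d rather not open an interactive MySQL session, the same DELETE can be issued from the shell with the client’s -e flag. A sketch (the table name, database name, user, and IP range are examples — adjust them to your own prefix and range):

```shell
# Build the unblock statement; the IP range below is an example.
RANGE='123.45.67.%'
SQL="DELETE FROM aiowps_login_lockdown WHERE failed_login_ip LIKE '$RANGE';"

# Show the statement that would be run.
echo "$SQL"

# To execute it for real (prompts for the password):
#   mysql -u admin -p wordpress -e "$SQL"
```

Keeping the statement in a variable lets you eyeball it before anything is deleted.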

You should now be able to log in!


The PHPMyAdmin method is similar to what’s listed above:

  1. Find the appropriate database
  2. Locate the login_lockdown table
  3. Find any records that are blocking your IP range
  4. Delete them



Find Your MySQL Username/Password in WordPress

If you need to manually manage your MySQL database associated with a WordPress installation, you’ll need to get the proper credentials first. Database connection information usually consists of:

  • Username (DB_USER)
  • Password (DB_PASSWORD)
  • Database name (DB_NAME)
  • Database host (DB_HOST)
  • Database port (WordPress assumes MySQL’s default port of 3306)

This information can be found in your wp-config.php. To show all lines of wp-config.php that have “DB_” in them, run the following command from the terminal:

grep 'DB_' wp-config.php
define('DB_NAME', 'wordpress');
define('DB_USER', 'username');
define('DB_PASSWORD', '********');
define('DB_HOST', 'localhost');
define('DB_CHARSET', 'utf8');
define('DB_COLLATE', '');

This information can now be used to log in to MySQL’s command-line interface:

mysql -u username -p

Leaving the “-p” parameter empty will trigger MySQL to prompt you for a password. On a *NIX server, it will look like you’re not typing anything — this is by design. While you may specify the password in the same line, this can leave your plaintext password in your command history, which is easily readable. If you want to use this format anyway (i.e., in a script), note that you cannot put a space between the “-p” flag and your password:

mysql -u username -ppassword

Once you’ve logged in, you can view available databases with the show databases; command. To use your wordpress database, take the value from DB_NAME (above) and use the use command: use wordpress;. To see available tables in the selected database, run show tables;.
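The grep step above can be taken one step further by extracting the values themselves. This is a sketch that works only for simple single-line define() calls, and the wp_config_get helper is something I’m making up here, not a WordPress function; it runs against a throwaway copy in /tmp, so point the path at your real wp-config.php:

```shell
# Create a throwaway wp-config.php excerpt to demonstrate against.
cat > /tmp/wp-config.php <<'EOF'
define('DB_NAME', 'wordpress');
define('DB_USER', 'username');
define('DB_PASSWORD', 'secret');
define('DB_HOST', 'localhost');
EOF

# Hypothetical helper: pull one value out of a define() call.
wp_config_get() {
    sed -n "s/^define('$1', '\(.*\)');$/\1/p" /tmp/wp-config.php
}

DB_USER=$(wp_config_get DB_USER)
DB_NAME=$(wp_config_get DB_NAME)
echo "$DB_USER@$DB_NAME"

# The extracted values can then feed the mysql client, e.g.:
#   mysql -u "$DB_USER" -p "$DB_NAME"
```

This keeps the password out of your command history entirely, since the -p flag still prompts interactively.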
