Bill Agee's blog

Technology musings with a twist of user empathy.

Dockerized Ghostdriver Selenium Tests

(tl;dr: This post describes how to build a Docker image for running Python GhostDriver/PhantomJS tests in a container.)

Background

In a previous post I described how to set up an environment to run automated Selenium WebDriver tests using the Ghostdriver/PhantomJS/Python web testing stack.

But these days, that’s the sort of setup chore which you might consider Dockerizing, to avoid needless manual repetition of your setup recipe. (Because if your testing project gets any sort of traction at all, you’ll inevitably need to replicate your environment on multiple machines.)

A side note about reusing images

In this post we’ll be building (from scratch) a Docker image capable of running GhostDriver tests, but in other situations, consider searching for an existing Docker image that does what you want - you might find one and be that much closer to your goal, whatever it may be!

To get started searching for images, see https://hub.docker.com/

Outline

Down to business - let’s build a Docker image.

This is an outline of the steps we’ll be performing:

  1. We’ll create a Docker image which contains:

    • Ubuntu 14.04 (as the underlying base image)
    • PhantomJS
    • Python 2.7 and pip
    • The Selenium WebDriver Python bindings
    • A Python script that uses Ghostdriver and PhantomJS to perform a Google search test
  2. We’ll then use that image to run a container that:

    • Executes the Python Google search test script
    • Exits with an error if the test fails
    • Automatically removes the container when the test is complete
  3. Next, we’ll do some interactive work in a container, by:

    • Launching bash in a container (instead of the search test script)
    • In the containerized bash shell, we’ll edit and manually run the modified test script
  4. We’ll create a Makefile to repeat the tasks above with fewer keystrokes.

  5. Finally, we’ll push the image to a public repo on Docker Hub.

Whew! Let’s begin.

1. Creating your Docker image

  • First, if you don’t have Docker installed, follow the Docker Engine install guide for your OS, at:

https://docs.docker.com/engine/installation/

I’m using a Mac to write this guide - specifically, I have Docker Toolbox 1.8.2a installed on OS X 10.11.

  • For the first step, create a new dir and cd into it:
mkdir myimage && cd myimage
  • Now create a file named Dockerfile in the myimage dir.

Inside the empty Dockerfile, paste these lines:

FROM ubuntu:14.04

# Install the phantomjs browser, Python, and the Python Selenium bindings
RUN apt-get update && apt-get install -y \
        phantomjs \
        python2.7 \
        python-pip \
        && pip install selenium

# Run a Ghostdriver demo script
ENV my_test_script=google-search-test.py
COPY ${my_test_script} /
CMD "/${my_test_script}"

Notice the line ENV my_test_script=google-search-test.py

That sets the my_test_script environment variable to the name of an executable script, which subsequently gets copied to / in your image (via the COPY instruction on the next line).

And eventually when we reach the point of launching a container, that Python script will be executed by way of the CMD instruction you see at the last line of the Dockerfile.

  • Now, create the file google-search-test.py in the same dir as your Dockerfile, so that the COPY command has something to act on.

For that script’s content, you can start with a simple hello world example:

#!/usr/bin/env python

print "Hello world!"

Or, you could go for the gusto and use a more complete GhostDriver script, such as the one from the Github repo related to this post:

https://github.com/billagee/ghostdriver-py27/blob/master/google-search-test.py

  • Once the google-search-test.py file has been created, you need to make it executable so that it will also be executable in the container:
$ chmod 755 google-search-test.py
  • Now, try building your image:
docker build --rm --force-rm -t myrepo/ghostdriver-py27 .

Make sure not to omit the build command’s trailing .

The build command output should show the apt-get update and apt-get install output, and eventually show your Python script being copied into the image.

2. Running a Docker container

Now try executing your Python script in a container, with docker run.

docker run --rm myrepo/ghostdriver-py27

When the container exits, you should see the output of your Python script.

For example, if your Python script is the hello world example above, the run output should resemble:

$ docker run --rm myrepo/ghostdriver-py27
Hello world!

And here is the output when running the GhostDriver example script from https://github.com/billagee/ghostdriver-py27/blob/master/google-search-test.py:

$ docker run --rm myrepo/ghostdriver-py27
.
----------------------------------------------------------------------
Ran 1 test in 2.350s

OK
Navigating to 'http://www.google.com'...
Checking search box presence...
Performing search request...
current_url is now 'http://www.google.com/search?hl=en&source=hp&biw=&bih=&q=selenium&gbv=2&oq=selenium&gs_l=heirloom-hp.12...185.189.0.214.8.1.0.0.0.0.0.0..0.0....0...1ac.1.34.heirloom-hp..8.0.0.pxP-og9Td_o'

Note that if you make changes to the Python script, re-running the docker build command will add your new changes to the image.

Also take note of the --rm option, which causes docker run to destroy the container on exit. This is nice when rapidly making changes and re-running containers - when working in that fashion, it’s better to have less container cruft to clean up later.

3. Getting an interactive shell in a Docker container

If you’re new to Docker, you might be wondering how to launch a shell in a container and use it interactively.

Here’s one way to do it - you can pass the name of an executable to docker run, which will override the Python script payload specified in the Dockerfile’s CMD line.

Passing bash as the executable and using the -it options to docker run will give you a bash shell with which you can do anything you like - for example, installing more packages, modifying and re-running your test script, or experimenting with other changes you’re considering adding to your Dockerfile.

The full command to get an interactive shell in a container looks like:

docker run -it myrepo/ghostdriver-py27 bash

You should then see a shell prompt, which you can use to run arbitrary commands (as root) in your Ubuntu container.

For example, you might check the container’s phantomjs version, or check the kernel and OS versions:

(Note I’m testing on a Mac running Docker Toolbox, so the uname output may differ from yours.)

root@9ed850542508:/# phantomjs --version
1.9.0

root@9ed850542508:/# python --version
Python 2.7.6

root@9ed850542508:/# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 14.04.3 LTS
Release:    14.04
Codename:   trusty

root@9ed850542508:/# uname -a
Linux 9ed850542508 4.0.9-boot2docker #1 SMP Thu Sep 10 20:39:20 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

In the container, the Python script that you placed in the image with COPY ${my_test_script} / can be found at /google-search-test.py:

root@05f05f9d7537:/# ls -la /google-search-test.py
-rwxr-xr-x 1 root root 1065 Feb 14 04:08 /google-search-test.py

Another useful thing to do is to install your favorite editor, edit the container’s Python script, then run the modified script manually:

root@05f05f9d7537:/# apt-get install vim -y

# ...snip package installation output...

root@05f05f9d7537:/# vim /google-search-test.py

# Make some edits, then launch your modified script:

root@05f05f9d7537:/# /google-search-test.py

NOTE: When you exit the containerized shell, if the container was launched with docker run --rm, the container will be destroyed, along with any changes to files you made while interactively working within it.

But if you don’t use docker run --rm, once you exit the container shell, you’ll see the container in the output of docker ps -a:

$ docker run -it myrepo/ghostdriver-py27 bash

root@8a563421bdb3:/# exit
exit

$ docker ps -a
CONTAINER ID        IMAGE                       COMMAND             CREATED             STATUS                      PORTS               NAMES
8a563421bdb3        myrepo/ghostdriver-py27     "bash"              7 seconds ago       Exited (0) 2 seconds ago                        mad_curie

To remove the container manually you can pass its CONTAINER ID or NAME to docker rm:

$ docker rm 8a563421bdb3
8a563421bdb3

4. Creating a Makefile

This step is completely optional, but you may find it convenient.

If you’re going to be frequently building your image and running containers on the command line, a Makefile can provide convenient shorthand commands to accomplish those tasks.

For example, building your image could look like:

$ make build

And to run a container:

$ make

# ...or `make run` if you want to be explicit

Launching a containerized shell could look like:

$ make shell

If you’re not familiar with Makefiles, setting one up simply involves creating a file named Makefile (in this case, you should put it in the same dir with your Dockerfile).

The example Makefile in the gist below provides build, run, shell, and clean targets - the latter deletes your local image using docker rmi.

If you don’t want your Makefile to use the example image name (myrepo/ghostdriver-py27) used in this post, just change the value of the repo_name variable in the Makefile.
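For reference, a minimal sketch of what such a Makefile might contain (this is an approximation of the gist's targets, not a verbatim copy - note that `run` comes first so that a bare `make` runs a container):

repo_name := myrepo/ghostdriver-py27

run:
	docker run --rm $(repo_name)

build:
	docker build --rm --force-rm -t $(repo_name) .

shell:
	docker run -it --rm $(repo_name) bash

clean:
	docker rmi $(repo_name)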

  • NOTE! If running make gives you a separator error like:
$ make
Makefile:11: *** missing separator.  Stop.

…then check to make sure all indentation in your Makefile is done with tab characters. If all else fails, use wget or curl to download the Makefile gist shown below. For example:

wget https://gist.githubusercontent.com/billagee/a11874bb83d54ffcfaf8/raw/f4cd1e0bd88d56959286774adba77a81e7d2f20d/Makefile

5. Pushing your image to Docker Hub

If you want to take the next step toward sharing your image with other users via Docker Hub, here’s how to do that.

  • First, create a Docker Hub account at hub.docker.com.

  • With that done, you can log in to Docker Hub’s web UI and use the Create Repository button to make a new repo.

Set the repository name to whatever you like (e.g., experiment), and choose whether to make the repo’s visibility public or private. Clicking the Create button wraps things up.

  • To push your existing local image (myrepo/ghostdriver-py27) to Docker Hub without rebuilding it under a new name, you can perform these steps on the command line:

    • docker login
    • Tag your existing image with the new repo’s name: docker tag myrepo/ghostdriver-py27 YOUR_DOCKER_USERNAME/YOUR_REPO_NAME
    • Push the image to its new repo in Docker Hub: docker push YOUR_DOCKER_USERNAME/YOUR_REPO_NAME

Note you’ll need to replace YOUR_DOCKER_USERNAME/YOUR_REPO_NAME with your Docker username and the Docker Hub repo name you chose - e.g., I used billagee/experiment, which looks like this on the CLI:

$ docker tag myrepo/ghostdriver-py27 billagee/experiment

$ docker push billagee/experiment

Once that step is completed, others will be able to docker pull your image.

Also on the topic of sharing images, the Github repo at https://github.com/billagee/ghostdriver-py27 shows an example of the finished Dockerfile, Makefile, and GhostDriver script produced by completing the steps in this post.

And here’s a Docker Hub repo linked to that Github repo - you can retrieve the latest image from this repo with docker pull billagee/ghostdriver-py27

An interesting feature to point out: A Docker Hub repo (like the one above) linked to a Github repo can be set up to build and push your image automatically when changes are made to the Github repo. You can also manually trigger builds in the Docker Hub web UI, or with an API call.

As an example, here are the results of a manually-triggered build of my image:

https://hub.docker.com/r/billagee/ghostdriver-py27/builds/bycpbhriwttas2uuxkmbcu4/

For more info on the topic of automated builds, see https://docs.docker.com/docker-hub/builds/

Signing off until next time - and viva la containerism!

Scrolling to an Element With the Python Bindings for Selenium WebDriver

When using Selenium WebDriver, you might encounter a situation where you need to scroll an element into view.

By running the commands in the following steps, you can interactively try out a solution using Google Chrome.

1. Set up a scratch environment and install the selenium package

My usual habit when starting a scratch project like this one is to set up a clean Python environment with virtualenv:

# In your shell:
mkdir scrolling
cd scrolling/
virtualenv env
. env/bin/activate
pip install selenium

2. Launch the Python interpreter, open a Chrome session with the selenium package, and navigate to Google News

$ python
...
>>> from selenium import webdriver
>>> d = webdriver.Chrome()
>>> d.get("http://news.google.com/")

3. Identify the element that serves as the heading for the “Most popular” section of the page, then scroll to it by executing the JavaScript scrollIntoView() function

>>> element = d.find_element_by_xpath("//span[.='Most popular']")
>>> element.text
u'Most popular'
>>> d.execute_script("return arguments[0].scrollIntoView();", element)

Note that the Python WebDriver bindings also offer the location_once_scrolled_into_view property, which currently scrolls the element into view when retrieved.

However, that property is noted in the selenium module docs as subject to change without warning - and it also places the element at the bottom of the viewport (rather than the top), so I prefer using scrollIntoView().

4. Scroll the element a few px down toward the center of the viewport, if necessary

After the code above scrolls the element to the top of the window, you may find you need to scroll the document backwards to scoot the element slightly towards the center of the window - this can be necessary if the element is hidden under another element (for example, a toolbar that blocks clicks to the element you’re interested in).

Such scrolling is easy to do - this JS scrolls the document backwards by 150px, placing your element closer to the center of the viewport:

>>> d.execute_script("window.scrollBy(0, -150);")

That’s all for now; I suspect I’ll continue to run into other types of element scrolling issues when using WebDriver, so this post may become the first in a series!

Setting Up ChromeDriver and the Selenium-WebDriver Python Bindings on Ubuntu 14.04

This post documents how to set up an Ubuntu 14.04 64-bit machine with everything you need to develop automated tests with Selenium-WebDriver, Google Chrome, and ChromeDriver, using the Python 2.7 release that ships with Ubuntu.

These steps might be useful to someone in the near term, and perhaps in the future this post could make for an interesting time capsule - remembering the WebDriver that was!

All steps assume you’ve just booted a fresh Ubuntu 14.04 64-bit machine and are at the command prompt:

1. Download and install the latest Google Chrome release

bill@ubuntu:~$ wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb

bill@ubuntu:~$ sudo dpkg -i --force-depends google-chrome-stable_current_amd64.deb

2. Download and install the latest amd64 chromedriver release

Here we use wget to fetch the version number of the latest release, then plug the version into another wget invocation in order to fetch the chromedriver build itself:

LATEST=$(wget -q -O - http://chromedriver.storage.googleapis.com/LATEST_RELEASE)
wget http://chromedriver.storage.googleapis.com/$LATEST/chromedriver_linux64.zip

Symlink chromedriver into /usr/local/bin/ so it’s in your PATH and available system-wide:

unzip chromedriver_linux64.zip && sudo ln -s $PWD/chromedriver /usr/local/bin/chromedriver

3. Install pip and virtualenv

Using virtualenv allows you to install the Selenium Python bindings (and any other Python modules you might want) into an isolated environment, rather than the global packages dir, which (among other benefits) can help make your test environment easily reproducible on other machines:

bill@ubuntu:~$ python -V
Python 2.7.6

bill@ubuntu:~$ sudo apt-get install python-pip

bill@ubuntu:~$ sudo pip install virtualenv

4. Create a dir in which to install your virtualenv environment, and install and activate a new env

More documentation on what’s being done here is available in the virtualenv docs.

bill@ubuntu:~$ mkdir mytests && cd $_

bill@ubuntu:~/mytests$ virtualenv env

bill@ubuntu:~/mytests$ . env/bin/activate

5. Install the Selenium bindings for Python

(env)bill@ubuntu:~/mytests$ pip install selenium
Collecting selenium
  Downloading selenium-2.44.0.tar.gz (2.6MB)
    100% |################################| 2.6MB 1.8MB/s
Installing collected packages: selenium
  Running setup.py install for selenium
Successfully installed selenium-2.44.0

6. Launch Python in interactive mode, and briefly ensure you can launch a browser with ChromeDriver

Once the browser is open, navigate to www.google.com and print the document title:

(env)bill@ubuntu:~/mytests$ python
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> from selenium import webdriver
>>> d = webdriver.Chrome()
>>> d.get("http://www.google.com/")
>>> d.title
u'Google'

That’s all you need to get started - the next step I would suggest is to explore how to run Selenium scripts using pytest or unittest. That sounds like good territory to cover in a subsequent post, so perhaps I’ll revisit it!
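As a small taste of that next step, here's the general shape such a test takes with unittest. This is only a sketch - to keep it runnable anywhere, the browser is replaced with a hypothetical FakeDriver stand-in; in a real test you'd use webdriver.Chrome() in setUp instead:

```python
import unittest


class FakeDriver(object):
    """Hypothetical stand-in for webdriver.Chrome(), so this sketch runs
    without a browser. In a real test, delete this class."""
    title = "Google"

    def get(self, url):
        pass  # a real driver would navigate here

    def quit(self):
        pass  # a real driver would close the browser here


class GoogleHomepageTest(unittest.TestCase):
    def setUp(self):
        # In a real test: self.driver = webdriver.Chrome()
        self.driver = FakeDriver()

    def tearDown(self):
        self.driver.quit()

    def test_title(self):
        self.driver.get("http://www.google.com/")
        self.assertEqual(self.driver.title, "Google")


if __name__ == "__main__":
    unittest.main(exit=False)
```

The setUp/tearDown hooks are what make this pattern handy for WebDriver work - every test gets a fresh driver, and the browser is closed even when a test fails.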

Automatically Capture Browser Screenshots After Failed Python GhostDriver Tests

GhostDriver is a fantastic tool, one which I’ve been happily using for a while now (and have briefly written about before).

I feel it’s worth mentioning that troubleshooting GhostDriver tests can seem like a challenge in and of itself if you’re used to having a browser GUI to help you visually pinpoint problems in your tests.

This post describes a technique intended to make GhostDriver troubleshooting easier: How to capture a screenshot automatically if your test raises an exception.

Just as in this blog post by Darrell Grainger, we’ll be using the EventFiringWebDriver wrapper to take screenshots after test failures; but here we’ll be using the Python WebDriver bindings rather than Java.

On that note, it’s worth linking to the unit test script for EventFiringWebDriver found in the WebDriver Python bindings repo.

Here’s the GhostDriver screenshot demo code - after running it, you should have a screenshot of the google.com homepage left behind in exception.png:

#!/usr/bin/env python

# * Note: phantomjs must be in your PATH
#
# This script:
# - Navigates to www.google.com
# - Intentionally raises an exception by searching for a nonexistent element
# - Leaves behind a screenshot in exception.png

import unittest
from selenium import webdriver
from selenium.webdriver.support.events import EventFiringWebDriver
from selenium.webdriver.support.events import AbstractEventListener

class ScreenshotListener(AbstractEventListener):
    def on_exception(self, exception, driver):
        screenshot_name = "exception.png"
        driver.get_screenshot_as_file(screenshot_name)
        print("Screenshot saved as '%s'" % screenshot_name)

class TestDemo(unittest.TestCase):
    def test_demo(self):

        pjsdriver = webdriver.PhantomJS("phantomjs")
        d = EventFiringWebDriver(pjsdriver, ScreenshotListener())

        d.get("http://www.google.com")
        d.find_element_by_css_selector("div.that-does-not-exist")

if __name__ == '__main__':
    unittest.main()

Installing Sikuli 1.0.1 on Ubuntu 12.04

While working on a stackoverflow answer about Sikuli today, I noted that installing Sikuli on Ubuntu 12.04 isn’t a one-step process - there are a few dependencies that need manual intervention before you even install it.

Here’s the rundown of the steps that worked for me to get a simple Sikuli script working:

1. Install the Oracle JRE

I used version 1.7.0_51:

$ java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

Make sure java is in your PATH, or else the Sikuli IDE will have issues.

2. Install OpenCV 2.4.0

sudo add-apt-repository ppa:gijzelaar/opencv2.4
sudo apt-get update
sudo apt-get install libcv-dev

Alternatively, you can probably achieve the same by building/installing OpenCV 2.4.0 from source. I went the package route, though.

3. Install Tesseract 3

sudo apt-get install libtesseract3

4. Download and launch sikuli-setup.jar

As recommended in the Sikuli install guide, I saved the installer to ~/SikuliX and ran it there as well.

mkdir ~/SikuliX
cd ~/SikuliX && java -jar sikuli-setup.jar

From there, I selected the “Pack 1” option in the GUI and let setup proceed normally.

5. Launch the Sikuli IDE, create a Sikuli script, and run it.

To launch the IDE, I’m using the command:

~/SikuliX/runIDE

If the IDE dies without an error after you try running your script with the Run button in the GUI, running your .sikuli project on the command line may help uncover what’s going wrong.

To do so, you can use the “runIDE -r” option; you’ll hopefully get much more info about the error.

For example, running the project “foo.sikuli” on the command line is as simple as:

~/SikuliX/runIDE -r foo.sikuli

Using 7zip in Lieu of GNU Tar on the Command Line

These days I’m accustomed to having the 7z command available on Unix-like systems (thanks to the p7zip project).

On top of that, 7zip is always one of the first utils I install on any Windows machine I work with.

So as an exercise in cross-platform style (or just for the heck of it), I sometimes use 7z instead of tar when working with archive files.

Here’s a list of basic file archiving tasks, with a comparison of how each is tackled with GNU tar versus 7zip:

Compress and archive a directory, preserving paths

Imagine you want to compress and archive the directory “foo/” and its contents:

foo/
foo/level1/
foo/level1/level2/
foo/level1/level2/hi.txt

tar

With GNU tar you can create such an archive with:

tar czf foo.tar.gz foo

7zip

To create a similar archive with 7zip (specifically, the 7z, 7z.exe, or 7za.exe binaries), use the 7z a command:

7z a foo.7z foo

Interestingly, with 7zip you can also omit the name of the archive file to create; this results in an archive file with a .7z extension, otherwise named after the archived dir:

7z a foo

Also note that the 7z format is the default archive type created, unless you specify an alternative type with the -t option.

Extract an archive, recreating paths

This is simple enough, and quite similar between the two tools:

tar

tar xf foo.tar.gz

7zip

7z x foo.7z

Note that the 7z e command (which you may discover before 7z x) will ignore the directory structure inside the archive, and extract every file and dir into your current dir. That behavior will come in handy for a later task.

Determine whether a given file is in the archive

7zip

With 7z, this is pretty straightforward when using the 7z l (list) command combined with the -r (recurse) option:

7z l -r foo.7z hi.txt

tar

With GNU tar, there are several ways to approach this task.

You can pass the full path to the file to tar tf, along with the archive file name, and tar will error out if there’s no match inside the archive:

tar tf foo.tar.gz foo/level1/level2/hi.txt

Or, if the original, unarchived dir structure is still present on disk, you can pass it to tar d (--diff), and tar will compare the archive with the unarchived dir:

tar df foo.tar.gz foo/level1/level2/hi.txt

Note that BSD tar does not appear to have anything like the d/--diff option.

After all is said and done, piping tar t output to grep may be the most suitable option here:

tar tf foo.tar.gz | grep hi.txt

Extract a single file from an archive into the current dir

This scenario is interesting, in that the task is noticeably simpler when using 7zip.

Let’s say you want to extract hi.txt from the archive, placing the file in your current dir.

7zip

With 7z, you can use 7z e -r to retrieve the file (in this case hi.txt), even if it’s several levels down in the archive:

7z e -r foo.7z hi.txt

tar

With GNU or BSD tar you’ll need to count how many levels deep in the archive’s dir hierarchy your file lives, and pass that number of leading dirs to strip via --strip-components:

tar --strip-components=3 -xf foo.tar.gz foo/level1/level2/hi.txt
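As a side note, all of the tar tasks above can also be scripted with Python's standard library tarfile module. Here's a sketch that builds the example foo/ layout in a scratch dir and mirrors each task (the layout and file names match the examples above; everything else is stdlib):

```python
import os
import tarfile
import tempfile

# Build the example layout from above: foo/level1/level2/hi.txt
scratch = tempfile.mkdtemp()
os.chdir(scratch)
os.makedirs("foo/level1/level2")
with open("foo/level1/level2/hi.txt", "w") as f:
    f.write("hi\n")

# Compress and archive the dir, preserving paths (like `tar czf foo.tar.gz foo`)
with tarfile.open("foo.tar.gz", "w:gz") as tar:
    tar.add("foo")

# Check whether a given file is in the archive (like `tar tf ... | grep hi.txt`)
with tarfile.open("foo.tar.gz") as tar:
    found = any(name.endswith("hi.txt") for name in tar.getnames())

# Extract a single file into the current dir, dropping its leading dirs
# (like `tar --strip-components=3 -xf foo.tar.gz foo/level1/level2/hi.txt`)
with tarfile.open("foo.tar.gz") as tar:
    member = tar.getmember("foo/level1/level2/hi.txt")
    member.name = os.path.basename(member.name)
    tar.extract(member)
```

After this runs, hi.txt sits in the scratch dir, just as with the --strip-components invocation above.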

Using cURL to Access Bugzilla’s XML-RPC API

Today I had the chance to briefly explore Bugzilla’s API.

I used curl to experiment with the XML-RPC API a bit - in the end I just scratched the surface of what’s possible, but it was interesting nonetheless.

Here are a few examples of things you can do:

Hello World

A nice hello world example for the Bugzilla API is to query your Bugzilla server for its version, as documented a while back on the Pivotal Labs blog.

This example uses Mozilla’s public server at https://bugzilla.mozilla.org.

Note the use of the Bugzilla.version methodName, and the way the output is piped to tidy for indentation and pretty-printing:

curl --silent --insecure \
  https://bugzilla.mozilla.org/xmlrpc.cgi \
  -H "Content-Type: text/xml" \
  -d "<?xml version='1.0' encoding='UTF-8'?><methodCall><methodName>Bugzilla.version</methodName> <params> </params> </methodCall>" \
  | tidy -xml -indent -quiet

That command should output:

<?xml version="1.0" encoding="utf-8"?>
<methodResponse>
  <params>
    <param>
      <value>
        <struct>
          <member>
            <name>version</name>
            <value>
              <string>4.2.6+</string>
            </value>
          </member>
        </struct>
      </value>
    </param>
  </params>
</methodResponse>

XPath expressions

To reduce visual clutter, and select specific elements, it’s handy to use an XPath expression to extract values you’re interested in.

For example, to select the version value from the above query, you can pipe curl’s output to the xpath command-line program (which appears to ship with OS X):

curl --silent --insecure \
  https://bugzilla.mozilla.org/xmlrpc.cgi \
  -H "Content-Type: text/xml" \
  -d "<?xml version='1.0' encoding='UTF-8'?><methodCall><methodName>Bugzilla.version</methodName> <params> </params> </methodCall>" \
  | xpath '//name[contains(text(), "version")]/../value/string/text()'

That command should print:

Found 1 nodes:
-- NODE --
4.2.6+
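If you'd rather not depend on the xpath command, the same extraction can be done with Python's standard library. Here's a sketch that parses a methodResponse like the one shown earlier (the XML is pasted in as a bytes literal rather than fetched, to keep the example self-contained and offline):

```python
import xml.etree.ElementTree as ET

# A methodResponse like the one returned by the Bugzilla.version call above.
# Note: fed to fromstring() as bytes, since ElementTree rejects str input
# that carries an encoding declaration.
response = b"""<?xml version="1.0" encoding="utf-8"?>
<methodResponse>
  <params>
    <param>
      <value>
        <struct>
          <member>
            <name>version</name>
            <value>
              <string>4.2.6+</string>
            </value>
          </member>
        </struct>
      </value>
    </param>
  </params>
</methodResponse>"""

root = ET.fromstring(response)

# Find the <member> whose <name> is 'version', then read its <string> value -
# the same selection the XPath expression above performs
version = None
for member in root.iter("member"):
    if member.findtext("name") == "version":
        version = member.findtext("value/string")

print(version)
```

In a real script you'd feed fromstring() the bytes returned by your HTTP request instead of a pasted literal.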

Getting bug data

To take things a step further, here’s another example - looking up the summary and creation_time values of a given bug ID.

The Bug.get method makes this possible, and an XPath expression that prints the text of the bug summary and creation_time values slims down the blob of XML returned by the API call.

This example will return information on bug 9940. Note how the bug ID is passed in the params list:

curl --silent --insecure \
  https://bugzilla.mozilla.org/xmlrpc.cgi \
  -H "Content-Type: text/xml" \
  -d "<?xml version='1.0' encoding='UTF-8'?><methodCall><methodName>Bug.get</methodName> <params><param><value><struct><member><name>ids</name><value>9940</value></member></struct></value></param> </params> </methodCall>" \
  | xpath '//name[contains(text(), "summary")]/../value/string/text() | //name[contains(text(), "creation_time")]/../value/dateTime.iso8601/text()'

The result should show you bug 9940’s creation date and awesome summary:

Found 2 nodes:
-- NODE --
19990715T20:08:00-- NODE --
Bugzilla should have a party when 1,000,000 bugs get entered

Party like it’s 1999!

Note that if your bugzilla server has authentication enabled, logging in via the API is also possible. A cookie can be obtained and used in subsequent requests.

OpenSSL Oneliner to Print a Remote Server’s Cert Validity Dates

Today I wanted to check the notBefore and notAfter validity dates of an SSL cert installed on a remote server.

I immediately wondered if there was an easy way to use the OpenSSL command line tool to accomplish this.

And there is - you just have to pass the output of openssl s_client to openssl x509, and away you go:

echo |\
  openssl s_client -connect www.google.com:443 2>/dev/null |\
  sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' |\
  openssl x509 -noout -subject -dates

That command should print the subject, notBefore, and notAfter dates of the certificate used by www.google.com:

subject= /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
notBefore=Jul 12 08:56:36 2013 GMT
notAfter=Oct 31 23:59:59 2013 GMT

I picked up the specifics of how to do this over at the very useful OpenSSL Command-Line HOWTO site. It’s worth reading in depth.
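Relatedly, if you want to do arithmetic on those validity dates, Python's standard library ssl module can parse the exact date format openssl prints. A quick sketch, using the notBefore/notAfter strings from the output above:

```python
import ssl
from datetime import datetime, timezone

# The date strings as printed by `openssl x509 -noout -dates`
not_before = "Jul 12 08:56:36 2013 GMT"
not_after = "Oct 31 23:59:59 2013 GMT"

# ssl.cert_time_to_seconds() converts the cert time format to a Unix timestamp
start = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_before), tz=timezone.utc)
end = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), tz=timezone.utc)

validity = end - start
print("Cert is valid for %d days" % validity.days)
```

From there it's a short step to, say, warning when a cert is within 30 days of expiry.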

Adding a Fancybox Gallery to a Rails 3.2 App in 5 Steps

I was interested in seeing how quickly one can add a lightbox gallery to a Rails app nowadays.

As it happens, there’s really not much to it, especially when using the fancybox-rails gem.

This post describes how to bring up an existing image viewer app (the “gallery-after” app from the github repo for Railscasts episode # 381), then add fancybox support to it.


Setting up a Rails app that displays images

  • First order of business: We need a Rails app that displays images so we can fancybox it up!

Rather than create one from scratch, let’s grab an existing app.

As mentioned above, one of the apps from Railscasts episode # 381 will do nicely. To get the files from github:

git clone https://github.com/railscasts/381-jquery-file-upload.git

When that completes, cd into the “gallery-after” app dir we’ll be using:

cd 381-jquery-file-upload/gallery-after/
  • Note that the app depends on rmagick, and rmagick depends on ImageMagick.

So next, install imagemagick. On OS X, you can use this homebrew command:

brew install imagemagick

On Linux distributions, ImageMagick will more than likely be available in your package management system.

  • This is the point where you’d normally do nothing more than type bundle, and the app would be usable in short order.

But I ran into a snag:

On my system, trying to ‘bundle install’ failed on the rmagick gem, during extension compilation, with this error:

"An error occurred while installing rmagick (2.13.1), and Bundler cannot continue."

The fix:

I modified gallery-after/Gemfile to make bundler fetch rmagick 2.13.2 - a version of the gem that resolves the install issue:

# In gallery-after/Gemfile, specify rmagick "2.13.2":
  gem 'rmagick', '2.13.2'

Then:

bundle

…and the installation of rmagick should succeed.

Side note: Near as I could tell, the rmagick problem is due to an incompatibility between rmagick 2.13.1 and the latest version of ImageMagick available via homebrew.

And the gallery-after/Gemfile.lock comes configured to install rmagick version 2.13.1, leading to the problem.

  • After your ‘bundle’ command succeeds, configure your sqlite database:
bundle exec rake db:setup
  • Launch the app:
bundle exec rails s

Point a browser at localhost:3000 and drag-and-drop some image files into your browser window.

This will insert the images into your DB, which will come in handy later so we have something to view in fancybox.

Adding fancybox-rails to the app

  • Stop the running app, and edit your Gemfile. Add the fancybox-rails gem:
# In Gemfile
gem 'fancybox-rails'

Then tell bundler to install it:

bundle

Edit app/assets/javascripts/application.js and add the fancybox line just under the jquery require statement already in the file:

//= require jquery
//= require fancybox
  • Next, take care of the fancybox CSS file.

Edit app/assets/stylesheets/application.css and add the fancybox line above the require_tree line:

/*
 *= require_self
 *= require fancybox
 *= require_tree .
 */
  • Now, edit app/assets/javascripts/paintings.js.coffee, and at the end of the file, add the code to initialize fancybox for links that have the class value grouped_elements:
jQuery ->
  $("a.grouped_elements").fancybox({
      'transitionIn'  :   'elastic',
      'transitionOut' :   'elastic',
      'speedIn'       :   600,
      'speedOut'      :   200,
      'overlayShow'   :   false
  });
  • Almost done!

The last step is to add a gallery link to the paintings partial, where the link’s class attribute value is set to the “grouped_elements” identifier we added to paintings.js.coffee.

Also, the gallery link’s rel attribute value needs to be defined; in fancybox, elements with the same rel value are considered part of the same gallery, which lets you flip between the images without having to close the fancybox viewer.

To take care of those steps, edit app/views/paintings/_painting.html.erb and insert the “view in gallery” link shown below, above the existing edit/remove links:

   <div class="actions">
<%# This is the line to add: -%>
     <%= link_to "view in gallery", painting.image_url, { :class => "grouped_elements", :rel => "zomg_awesome_images" } %> |
     <%= link_to "edit", edit_painting_path(painting) %> |
     <%= link_to "remove", painting, :confirm => 'Are you sure?', :method => :delete %>
   </div>

That’s all there is to it.

When you restart your Rails app, each image the app displays should now have a “view in gallery” link below it that launches fancybox, with navigation controls to skip from image to image!

Not too shabby for just a handful of extra lines of code.

Using Heroku Postgres as a Free Cloud Database Sandbox

Need a place to experiment with PostgreSQL, but not in the mood to set up the server locally?

Then try out the free dev plan on Heroku Postgres. No configuration or credit card required.

Creating a DB and manipulating it with the psql CLI can be done in just a few steps:

  • If you don’t already have a favorite postgres client, get the psql command-line program.

If you’re using OS X Lion or later, you already have psql; for older OS X installs (or if you want the server binaries too) you can install Postgres via Homebrew with:

brew install postgresql

For other platforms, download and install the PostgreSQL binaries for your machine.

  • If you’re shown a pricing page with plans to choose from, first click “Dev Plan (free)”, then click “Add Database”.

  • To get a convenient command you can copy and paste to launch the Postgres CLI on your local machine, click the name of your database, then click the connection settings button.

You should see a page showing your database’s connection settings.

  • In the menu, click PSQL, and a command will appear (already selected!) that you can copy and paste into your terminal to connect the psql command-line program to your database.

  • That’s it! Assuming psql is in your path, pasting the psql command will put you at an interactive prompt, and you’ll be ready to create tables and experiment as you like.
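
As an aside, Heroku can also express those same connection settings as a single postgres:// URL (for example, in an app’s DATABASE_URL config var). If you ever need to pull the individual settings back out of such a URL, Ruby’s standard URI library handles it; the URL below is purely hypothetical:

```ruby
require 'uri'

# A made-up connection URL in the shape Heroku uses:
url = URI.parse('postgres://user:secret@ec2-1-2-3-4.compute-1.amazonaws.com:5432/d34db4d1d34')

puts url.host                      # the EC2 hostname
puts url.port                      # 5432
puts url.user                      # the database user
puts url.path.delete_prefix('/')   # the database name (path minus the leading slash)
```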

Here’s an example session, in which a crude music database is created and queried:

$ psql "dbname=YOUR_DB_NAME host=YOUR_EC2_HOST user=YOUR_USER password=YOUR_PASS port=5432 sslmode=require"

d34db4d1d34=> \d
No relations found.
d34db4d1d34=>
CREATE TABLE artists (id int, name varchar(80));
CREATE TABLE releases (id int, name varchar(80));
CREATE TABLE recordings (id int, artist_id int, release_id int, name varchar(80));

INSERT INTO artists (id, name) VALUES (1, 'Underworld');
INSERT INTO releases (id, name) VALUES (1, 'Oblivion With Bells');
INSERT INTO recordings (id, artist_id, release_id, name) VALUES (1, 1, 1, 'To Heal');

INSERT INTO artists (id, name) VALUES (2, 'Stars');
INSERT INTO releases (id, name) VALUES (2, 'In Our Bedroom After the War');
INSERT INTO recordings (id, artist_id, release_id, name) VALUES (2, 2, 2, 'The Night Starts Here');

/* Get all recordings of each artist, and show the release */
SELECT rec.name AS recording, a.name AS artist, rel.name AS release
  FROM recordings AS rec
  INNER JOIN artists AS a
    ON rec.artist_id = a.id
  INNER JOIN releases AS rel
    ON rec.release_id = rel.id;

       recording       |   artist   |           release
-----------------------+------------+------------------------------
 To Heal               | Underworld | Oblivion With Bells
 The Night Starts Here | Stars      | In Our Bedroom After the War
(2 rows)
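
If SQL isn’t your first language, the two INNER JOINs above can be read as nested lookups: for each recording, find the artist and release rows whose ids match its foreign keys. A plain-Ruby sketch of the same computation, using the same sample data:

```ruby
# The three "tables", as arrays of hashes:
artists    = [{ id: 1, name: 'Underworld' }, { id: 2, name: 'Stars' }]
releases   = [{ id: 1, name: 'Oblivion With Bells' },
              { id: 2, name: 'In Our Bedroom After the War' }]
recordings = [{ id: 1, artist_id: 1, release_id: 1, name: 'To Heal' },
              { id: 2, artist_id: 2, release_id: 2, name: 'The Night Starts Here' }]

# For each recording, look up the matching artist and release (the INNER JOINs):
rows = recordings.map do |rec|
  artist  = artists.find  { |a| a[:id] == rec[:artist_id] }
  release = releases.find { |r| r[:id] == rec[:release_id] }
  [rec[:name], artist[:name], release[:name]]
end

rows.each { |r| puts r.join(' | ') }
# To Heal | Underworld | Oblivion With Bells
# The Night Starts Here | Stars | In Our Bedroom After the War
```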