Six Job Postings in the Office of Academic Innovation at the University of Michigan

I am very pleased to announce that the Office of Academic Innovation (AI) is actively recruiting for several new positions. I’ve included some brief information about the roles below. Please share this announcement across your networks.

Academic Innovation is a strategic priority for the University of Michigan. Through curricular innovation, leadership in learning analytics, and digital infrastructure at scale, AI aims to transform the way we educate and engage our residential students.  

Senior Application Architect (apply here)

The University of Michigan is seeking a qualified Application Architect/Software Developer to lead development of tools in the Digital Innovation Greenhouse (DIG). A primary focus for this individual will be leading the development of GradeCraft as well as contributing to the research mission of AI. Working with the world-class faculty and students of the University of Michigan, Academic Innovation is reimagining education through the use of cutting-edge technologies and innovative pedagogical practices. This work is conducted both within and across the three foundational labs of AI: the Digital Education and Innovation Lab (DEIL), the Digital Innovation Greenhouse (DIG), and the Gameful Learning Lab (GLL). (More information here)

Developer, Digital Innovation Greenhouse (apply here)

The University of Michigan is seeking a qualified Developer to play an active role on the software development team in the Digital Innovation Greenhouse, a strategic initiative managed by the Office of Academic Innovation (AI). This is an exciting new position within a startup environment, designing and developing digital education applications aimed at both enhancing the residential education experience and facilitating engaged, personalized learning through collaboration with the world-class faculty of the University of Michigan. This position will work with the developers, designers and scientists in the Digital Innovation Greenhouse, primarily on the rapidly growing ECoach application. ECoach combines innovative tailored-communication technology developed at the University of Michigan with course and student data to deliver features, communications and experiences designed to help students navigate their courses and campus experience more efficiently and with better results. (More information here)

Developer, Digital Innovation Greenhouse (apply here)

The University of Michigan is seeking a qualified Developer to play an active role in the development of software for the Digital Innovation Greenhouse (DIG). The DIG is one of three labs within the highly collaborative Office of Academic Innovation (AI). The DIG team brings software development expertise, behavioral science, pedagogical knowledge, and a passion for change to making a University of Michigan education more personalized, engaged, and lifelong. DIG is an innovation accelerator, using agile methods to grow education technologies from invention to infrastructure. (More information here)

MicroMasters Program Manager (apply here)

The University of Michigan is seeking a qualified MicroMasters Program Manager to represent the Digital Education & Innovation Lab (DEIL) in a lead program management role to ensure the successful design and development of new MicroMasters programs. Reporting to the Director of the Digital Education & Innovation Lab, the MicroMasters Program Manager is responsible for leading a cross-functional effort to launch new strategic initiatives within the Office of Academic Innovation. The program manager collaborates with faculty, academic unit administrators, course teams, instructional designers, instructional technologists, IT staff, digital media specialists and external platform partners to design, develop and deliver Michigan's MicroMasters programs. Ideal candidates actively embrace challenges, thrive on solving tough problems, demonstrate passion for digital education and hybrid learning and their impact on the future of higher education, and enjoy working within a dynamic, fast-paced team environment. (More information here)

Project Manager, Digital Education Initiatives (apply here)

The University of Michigan is seeking a qualified Project Manager to join a creative, talented and entrepreneurial team within the Office of Academic Innovation. The Project Manager position reports to the Project Facilitation Lead and collaborates with faculty, course teams, instructional designers, learning experience designers, media specialists, external platform partners, and other project managers to manage the scoping, design, and development of online courses and other learning initiatives at the Digital Education & Innovation Lab. Ideal candidates actively embrace challenges, thrive on solving tough problems, demonstrate passion for digital and blended education and their impact on the future of higher education, and enjoy working within a dynamic, fast-paced team environment. This position is an exciting role, working within a startup environment to develop, support and launch new digital learning initiatives through collaboration with the world-class faculty of the University of Michigan.

Coordinator, Digital Education and Innovation Lab (apply here)

The University of Michigan is seeking a qualified administrative assistant to play an active role in supporting the Digital Education and Innovation Lab located within the Office of Academic Innovation (AI). Reporting to the Director of the Digital Education and Innovation Lab (DEIL), this position is responsible for establishing and maintaining a standard of excellence and outstanding customer service related to the lab.  (More information here)

How Universities Can Grow a Culture of Academic Innovation

This article was originally posted on 10/22/2016 on EdSurge

James DeVaney, Associate Vice Provost for Academic Innovation

Chris Teplovs, Lead Developer for the Digital Innovation Greenhouse (DIG)

Remarkable breakthroughs happen at public research universities every day, but bridging the gap between early innovation and widespread adoption is a challenge that these institutions know all too well. This is especially the case when it comes to education technology and curricular innovation.

In 2015 the University of Michigan established the Digital Innovation Greenhouse (DIG) as part of the Office of Academic Innovation—a group charged with fostering a culture of innovation in learning in order to reimagine the 21st century public research university. DIG works with faculty, staff, and student user communities to grow tools to maturity, and establish pathways to scale through collaboration across and beyond the U-M community. With a team of developers, designers, behavioral scientists, data scientists and student fellows, DIG helps translate digital engagement tools from innovation to infrastructure. In its first year of operation, DIG tools were used by more than 22,000 U-M students and will soon be used by more than a dozen institutions.

DIG has received a steady flow of inquiries and visits from peer universities and edtech innovators. They all ask the same question: How do you get from early-stage innovation and R&D to adoption across an organization as complex as a public research university?

While it would be silly to offer an overly prescriptive recipe that fails to take each institution’s unique context into account, we think we’re onto something that works. We offer our colleagues at peer institutions and edtech companies nine considerations for cultivating innovation on campus and beyond.

1. Establish clear values and guiding principles.

DIG team members codified our approach in a set of guiding principles that articulates our values, commitments and approaches to fostering innovation. These principles include understanding users and creating a minimum viable product, for example, and we apply them to each project. As an agile partner to faculty innovators and academic units, the DIG team consistently navigates new terrain, and these principles and values provide clarity of purpose. (See how we recently applied the principles to a writing tool called M-Write.)

2. Be impractical. Then consider constraints.

We establish audacious goals in order to transition new digital engagement tools from innovation to infrastructure. Worrying over questions about culture, data and technology could easily constrain our thinking from the start. In order to scale tools to tens of thousands of users within the first 12 months we need freedom to think impractically.

We’ve been fortunate to attract a team with unique talents to cultivate these projects. They are comfortable with the ambiguity inherent to such an endeavor, and understand there is an appropriate time to layer constraints back into our design-thinking approach in order to address implementation and scale proactively.

3. Build a dynamic team.

DIG started with three lead developers who took on a unique set of projects. Part of our mission is to actively share what we learn across campus in order to stimulate new innovative experiments. As we shared our work, new innovators came forward and we quickly realized we needed additional talent to support them. The DIG team identified and prioritized additional needs and quickly added members with expertise in user experience design, behavioral science, data science, software development and innovation advocacy. We continue to grow our capabilities in these areas and more as we foster a culture of learning in innovation at U-M.

4. Welcome talented student contributors.

In addition to growing our full-time staff, we benefit from the engagement of students through our Student Fellows program. Over the course of a year, we hire approximately 20 undergraduate and graduate student fellows who are both mentored in and contribute to areas as diverse as software development, user experience design, graphic artistry, innovation advocacy and data science. Our experiences with our students have helped us to validate and prioritize new capabilities needed to grow our model, including UX design, software development in the MOOC space, and software development aligned with gameful learning.

5. Design a model for agile development that leverages opportunities for discovery and scale.

As a public research university with more than 40,000 students, U-M is one of the largest living laboratories for conducting experiments in academic innovation. By coupling rapid development and deployment with assessment we foster a virtuous cycle of innovation that leads to further discovery. These assessments take a variety of forms ranging from one-on-one interviews with end-users to using techniques from the emerging field of learning analytics.

6. Build products with—rather than for—users.

Instead of ROI, we measure our success by community engagement and educational impact.

As we build minimum viable products with faculty innovators and teams, we move quickly to create learning communities. This results in greater impact on campus and often accelerates knowledge sharing and adoption as well as our due diligence around options for commercialization. We build our tools with our community of users, not simply for them. This attention to community engagement in addition to adoption separates our approach from many off-campus models.

7. Recognize exit as opportunity and not a four-letter word.

We are also committed to ensuring that viable projects thrive once they leave DIG. From the earliest stages, exit plans and concomitant hardening-off plans are developed for all projects that enter the Greenhouse. In some cases the strategy involves commercialization; in other cases it could involve shifting responsibility to our information and technology services group; in a smaller number of cases it could involve closing the project or narrowing development around a particular use case. We find that our process of prototyping and iteration allows us to extract meaning from all projects and continue moving forward in pursuit of our mission.

8. Embrace emergence and continue to strengthen capabilities as new opportunities emerge.

The process of product development in lean startup environments and other highly innovative organizations often incorporates the notion of a “pivot,” which is a “structured course correction designed to test a new fundamental hypothesis about the product, strategy, and engine of growth.” We embrace the concept of pivoting in DIG but we also engage in continuous assessment of our own capabilities, identify new needs, and respond by strategically hiring highly qualified personnel into new roles.

9. Provide links between research and practice.

Our work in DIG is both scholarly and practical. Much of the higher education sector draws a line between teaching and learning on one side, and research on the other. Our work straddles these worlds with a more unifying focus on discovery. As an example, in creating ECoach we are focused on easing the transition to college for all students. We developed this innovative tool, accelerated adoption across campus, and built upon this deployed effort to attract a $1.9 million National Science Foundation grant to further explore how personalization can advance equity on campus.

The approach we have adopted within DIG and across the Office of Academic Innovation—applying lean startup principles to promote academic innovation at a research university—is, to the best of our knowledge, a novel one in the educational technology space. We see great promise in a scholarly and practical approach that aligns academic excellence, inclusion and innovation at its core.

Biking to Work (when you live 50 miles away)

I love working at the University of Michigan. The only downside is that I'm faced with a daily 50-mile (80 km) commute in my car each way. For about a year I drove to the nearest parking garage, which was about a block away from my office, parked, walked the single block to my office, stood or sat all day and then did the whole thing in reverse to get home. I was getting pretty cranky (and heavy) with the lack of exercise, and although I got myself a gym membership it was difficult to commit to working out. Sure, when things weren't crazy at work it was possible to sneak in a workout now and then, but when things picked up the first thing to go was fitness.

I decided it was time for a change, so this past Spring I started commuting from North Campus to my office on Central Campus.  The setup at North Campus was a covered bike rack that offered some protection from the elements but not a lot of protection from theft.  I wound up packing my bike in my car and taking it home for the weekend and returning with it in my car on Monday morning.  This was not ideal for a number of reasons not the least of which was always needing to explain to the Canadian Border Guards that that was my old commuter bike and not a new purchase.

One day as I was heading home after locking up my bike at North Campus I looked over and saw a sign advertising bike lockers at the Plymouth Rd. Park-and-Ride. I was ecstatic! I had rented a bike locker when I was working at the Ontario Institute for Studies in Education in Toronto. This would be a slightly different setup: rather than using the locker to lock up my bike during the day I would use it as storage during the evening and weekend. I immediately signed up for a locker, got a key within a couple of days, and I was all set to go.

Now this past summer was difficult for a number of reasons so my biking was hit and miss.  I’d say I used my bike about 50% of the time.  I’ll be looking to increase that as much as possible until the bad weather hits.  Once it does, I have the option of hopping on a bus to get me from the Park and Ride to campus.

So right now my typical schedule looks something like this:

5:15am Rise & Shine
5:45am Leave
6:05am Border (NEXUS lanes open at 6am)
6:35am Park & Ride, then cycle
7:00am Office
… (work, eat, work, workout, work) …
5:30pm Leave office, via bike
6:00pm Park & Ride
7:00pm Home

In all, I get about 55 minutes of riding in, plus I try to get in about 45 minutes of either swimming, rowing, or running at the gym.  I’ve also found that it’s much handier to have a bike in downtown Ann Arbor than a car, so that’s a bonus.

I work this schedule for three days a week, then I leave a bit early one day to take my oldest daughter to music lessons and I teach on the remaining day.  It’s challenging but very rewarding.  We’ll see how long this lasts!

It’s Great To Be a Wolverine!

As of this past Wednesday, I’ve been at the University of Michigan’s Digital Innovation Greenhouse (DIG) for exactly six months now.  Time flies!


All three lead developers (Ben Hayward, Kris Steinhoff, and myself) started on the same day in May.  It turns out the hiring committee made a good choice: not only do we have a diversity of complementary talent but we also work exceptionally well as a team.  We breezed through Tuckman’s “forming”, “storming”, and “norming” stages of group development and quickly arrived at the “performing” stage where we’ve remained to this day.

I remember the early whirlwind of “onboarding”: I was the only developer who was new to the University of Michigan so I had a bit more to learn about the internal goings on of this huge institution.  But everyone at the Office of Digital Education & Innovation (where the Digital Innovation Greenhouse is officially “housed”) was extremely helpful and accommodating.  It feels more like an extended family than a workplace.

Shortly after we started, we were joined by three very talented summer interns.  One intern had just graduated from the Human-Computer Interaction program in the School of Information and contributed some fantastic User Experience Design work to a number of our projects.  Unfortunately, she had to leave at the end of the summer to pursue opportunities elsewhere, but the other two interns were able to stay on and become our first Student Fellows.  Growing from a traditional internship approach, the Digital Innovation Greenhouse Student Fellows program now has 10 active fellows who span user experience design, software development, and innovation advocacy. It is a joy to work with such energetic and smart young people!

DIG was tasked with taking three initial projects to scale: the Academic Reporting Tools, Student Explorer, and ECoach were existing “version 1” tools that we are in the process of scaling up to “version 2 and beyond”. Development of these tools required us to learn the complexities and nuances of U-M’s Data Warehouse, which stores (amongst other things) details about student records that are key to the sorts of evidence-based decision support that all of these tools offer.

I took the lead on the Academic Reporting Tools (ART2.0) project and am pleased with the progress we have made. We rolled out a beta within 13 weeks of our start date and we are now in the process of phasing in beta tests in the form of user research. I can honestly say that the work I’ve produced for ART2.0 represents some of my best work to date and I’m proud of what we as a team have accomplished with it so far. I also appreciated the wisdom and dedication not only of Prof. Gus Evrard, the faculty innovator behind ART2.0, but also of the entire ART2.0 Steering Committee, without whom we would not have been able to come as far in so little time.

In addition to working with the wonderful and supportive faculty, staff, and students that make up the Office of Digital Education and Innovation, highlights of our journey so far include filming segments about each of these tools for the Practical Learning Analytics MOOC, running a Data Hackathon, starting the process of hiring a User Experience Designer, and taking on more direct and advisory roles in other projects around campus. Co-location of the lead developers and Student Fellows is an important component of our success, as are open lines of communication within DIG and DEI (we have weekly DIG status meetings so we know what’s happening more broadly; bi-weekly check-ins with our immediate supervisor; and monthly one-on-one meetings with our Director of Operations where we can talk about how we’re making progress on any number of personal and institutional fronts). We are well-positioned to grow over the foreseeable future.

People are often stunned when they learn that I commute from Windsor, Ontario every day. It typically takes me about an hour each way — not altogether different from the sorts of commutes that I experienced when I lived in Toronto. I am fortunate in that I have the option of working from home in the case of inclement weather but, frankly, I enjoy the energy of DEI so much that I would rather be physically present. And honestly, as a father of two young and highly energetic girls I enjoy the quiet time that the drive to and fro’ affords me. One thing that I’m not used to, however, is the seated commute in a car. When I was commuting in Toronto I was usually on my bike. The MHealthy program at U-M really helps in this regard. The recent team challenge to stay active provided the incentive for me to get a FitBit. It’s not always a great feeling to see just how little activity I’ve done in a day, but it sure is an eye-opener and it helps keep motivation high to keep on movin’. I found other aspects of the MHealthy program to be helpful too, like the free exercise consultation. Now that I’ve settled into a groove with work, perhaps I can take some of my energy to focus on my own health and well-being.

So with (American) Thanksgiving almost upon us, this chance to reflect on my first six months at the University of Michigan has highlighted just how much I have to be thankful for!

Joining The Digital Innovation Greenhouse at the University of Michigan

I have been dancing between academic work and entrepreneurship for the past several years. It has been exhilarating and exhausting. On the one hand, I have done some of my best and most innovative work as a “lone wolf”, acting as an expert consultant to several world-class institutions. On the other hand it has been a tremendously isolating experience: working solo is tough!

I recently learned of an initiative launched at the University of Michigan called the Digital Innovation Greenhouse (DIG).  I was immediately intrigued by its mission, which is to take educational technology innovations developed by researchers within the institution and grow them to scale for widespread adoption.   James DeVaney, the assistant vice provost in charge of the Office of Digital Education and Innovation (DEI), spoke about the DIG as well as the other labs associated with DEI in a recent interview with Campus Technology.

Earlier this month I accepted an offer to join the DIG team as one of the Lead Developers and I will be starting next week. This is truly a dream job for me. It allows me to capitalize on virtually all of my experiences to date: I can draw on my experiences as a student, a faculty member, a systems engineer, a software developer, a learning analytics researcher, and an entrepreneur.

For now we will continue to live in Windsor, which will allow my partner to continue her work as a faculty member in the Office of Open Learning at the University of Windsor and will allow us at least the temporary facade of stability.  The commute is an easy one and actually consumes less time than my commuting life in Toronto.  There will be some significant changes in the home life routine: I won’t be on daddy duty quite as much any more (thankfully we have found a most excellent French nanny to help out with child care).

The next few years will, no doubt, bring many challenges and rewards.  I am very excited about this next chapter in my life!

The Entrepreneurial Postdoc

As I look back over the past three years, I see that whereas I had thought I was leaving academia and embracing enterprise, what actually happened was something altogether unexpected.  Instead of focusing on product development, marketing it, and making lots of money I have remained squarely planted inside the research sphere.

In fact, I have followed a course that is typical of a post-doctoral research fellow: expanding on my skills by applying them to new settings while learning new techniques that complement my existing knowledge and expertise.  In other words, I have unwittingly completed a three-year self-funded postdoc!  All of the projects on which I have worked have involved working with a renowned university faculty member.  I have written grant proposals (some of which were funded), acted as a co-principal investigator on projects, honed my skills, advised students, and taught.  I have co-edited a book, published peer-reviewed articles, and presented my work at conferences.

One thing that I hated about my experiences as a self-funded researcher is the uncertainty about funding: I find it exceedingly difficult to focus on doing great work when I’m worried about where my next paycheque will come from. The feeling is somewhere between “being hungry” and “running scared”.  I worry that this sentiment is shared amongst most researchers. Through it all, though, I didn’t bail on my dream of doing leading-edge research and development.  Yes, there were times when I wanted to just quit, and yes, I did go to several interviews that very well could have led to a stable but mundane existence.  There are still days when I question my sanity in pursuing the dream, but most days aren’t like that.

I realize full well how lucky I am and I would like to thank those colleagues who bravely funded the various projects that have kept me afloat over the past three years.  What does the future hold?  More of the same, I think.  Whereas I haven’t gotten to market with any products just yet, I know that I have been moving steadily forward.  Although it seems I’m always “halfway closer” to my goals (but never quite there), my advisors and my intuition tell me I’m on the brink of “eureka” on several fronts. I’ve gotten better at knowing what I’m not good at and when to ask for help.

This year, we will see some of the first interns come through Problemshift Inc. and, should things work out as planned, a return to teaching for me. Combined with a strong commitment to my responsibilities as an advisor on several projects (a not-for-profit organization that teaches leadership through horsemanship, a new graduate-level program for online teaching and learning, and a working group that oversees the development of open-source analytic software), 2015 will be an exciting and busy year!

Happy New Year!

Using dokku to deploy my visualization apps

[Note: this is more of a reminder-to-self at this point but I hope to revise it over the next while to make it more useful to others. I struggled with some of the details that I specify below so I hope this is useful to someone who is contemplating a slightly more complicated dokku setup.]

I was recently deploying a droplet on DigitalOcean (interested in signing up?  use my referral code and we both get some credit!), had planned to use a standard LAMP stack, but got curious about some applications that I didn’t recognize. Worrying that my operations talents were getting (really) rusty, I thought I should invest some time coming up to speed on some of these new goodies. I decided to check out ‘dokku‘. Wow!

Simply put, dokku streamlines the use of linux containers to deploy applications (relatively) painlessly. There are a lot of great articles available on dokku: a great introduction to dokku written by its creator, and a collection of articles on how easy it is to deploy it on a DigitalOcean droplet: here, here, and here.
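To give a feel for the workflow (the hostname and app name below are made up for illustration), once dokku is installed on the droplet, deploying an application is little more than a git push:

```shell
# One-time setup: point a git remote at the droplet running dokku.
git remote add dokku dokku@my-droplet.example.com:myapp

# Each deploy is then just a push; dokku builds the app in a
# container and starts routing traffic to it.
git push dokku master
```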

Part of my work involves creating dynamic, web-based visualizations of organizational networks.  The data that is used to generate these network diagrams comes from a variety of sources: questionnaires, behavioural traces (including the behaviour of participants in online discussion forums from learning management systems like Sakai or Moodle).  I enjoy the work, but I was missing a good deployment workflow and decided to see if I could use dokku to help.  The short answer: yes!

In the past I had used Apache as a web server to serve up static web pages.  I knew that that was a relatively heavyweight solution so I decided to use this opportunity to try out Node.js/Express.  The nice part about going that route (pun intended) is that there are a good number of tutorials on how to get a Node.js application deployed using dokku (see the above links).

Before showing examples of the various configuration files, let me share a bit more about the visualization workflow. Getting the data from another system is somewhat of an onerous task. While I can describe the data format that I want, it’s often beyond the programmers at “the other end” to actually assemble that data, so I wind up specifying a generic set of data that I then wrangle and reduce into the correct format. This lends itself to a scaling problem: if several hundred users are pounding the visualization tool, it’s simply too expensive to call out to the native system for the raw data. It’s also not absolutely necessary to have completely up-to-date data: being 5 or 10 minutes behind is completely acceptable. Up to now, therefore, I have typically written a tiny python script that retrieves the raw data, wrangles it, and writes it out to a data file. Think ‘CGI’.
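As a sketch of that wrangle-and-reduce step (the record format and field names here are entirely hypothetical, invented for illustration), this is the kind of reduction such a script performs — collapsing raw interaction records into the nodes-and-links structure a network visualization expects:

```python
import json
from collections import Counter

def wrangle(records):
    """Reduce raw interaction records (hypothetical format) into a
    nodes-and-links structure suitable for a network visualization."""
    # Count how often each ordered pair of participants interacted.
    edge_counts = Counter((r['from'], r['to']) for r in records)

    # Collect every participant that appears and index them.
    names = sorted({name for pair in edge_counts for name in pair})
    index = {name: i for i, name in enumerate(names)}

    return {
        'nodes': [{'id': name} for name in names],
        'links': [
            {'source': index[a], 'target': index[b], 'weight': count}
            for (a, b), count in sorted(edge_counts.items())
        ],
    }

raw = [
    {'from': 'alice', 'to': 'bob'},
    {'from': 'alice', 'to': 'bob'},
    {'from': 'bob', 'to': 'carol'},
]
print(json.dumps(wrangle(raw), indent=2))
```

The resulting JSON file can be served statically and refreshed on a timer, which is exactly the shape of the celery-plus-MongoDB setup described below.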

This represented the biggest challenge for me in my exploration of dokku, but it was solvable!

Let me start by describing the end result: a dokku solution backed by a single-instance MongoDB plugin, with a MongoDB that is written to by a python process and read from by a Node.js instance. I use dokku-shoreman to allow a non-web process to be specified in the Procfile. The python process runs periodically thanks to a distributed task queue (celery), which is also backed by the same MongoDB instance. Because I’m using both python and node.js, I had to figure out how to use heroku-buildpack-multi to make it all work.
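For reference, heroku-buildpack-multi looks for a `.buildpacks` file in the repository root, one buildpack URL per line; mine was along these lines (the exact URLs or pinned versions you need may differ):

```
https://github.com/heroku/heroku-buildpack-python
https://github.com/heroku/heroku-buildpack-nodejs
```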

To understand how to get MongoDB working with Node.js, check out the great tutorials by Christopher Buecheler available here and here.

In the end, here’s what my Procfile looked like:

worker: celery -A tasks worker --loglevel=info --beat
web: node app.js

The python-related worker (tasks.py) looks more or less like this:

#!/usr/bin/env python

import json
import collections
import itertools
import urllib2
import os
import time
from pymongo import MongoClient
from celery import Celery
from celery.task import periodic_task
from datetime import timedelta
import logging

BROKER_URL = os.environ.get('MONGO_URL') or 'mongodb://localhost:27017/jobs'
app = Celery('tasks', broker=BROKER_URL)

logger = logging.getLogger('tasks')

@periodic_task(run_every=timedelta(minutes=5))
def get_external_data():
    logger.info('starting')
    logger.info(BROKER_URL)
    print "Retrieving data", time.strftime("%c")
    client = MongoClient(os.environ.get('MONGO_URL') or 'mongodb://localhost:27017')
    db = client[os.environ.get('MONGODB_DATABASE') or 'test1']
    collection = db['foobar']
    # ... stuff deleted
    post = {'...'}
    print "Retrieved data", time.strftime("%c")
    logger.info("Retrieved data.")

The @periodic_task line tells celery to run that method every 5 minutes. And every time that task is run, the contents of the MongoDB are replaced. Here’s my celeryconfig.py file:

import os

CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
host = os.environ.get('MONGODB_HOST') or ""
port = os.environ.get('MONGODB_PORT') or 27017
database = os.environ.get('MONGODB_DATABASE') or "jobs"

CELERY_RESULT_BACKEND = "mongodb"
CELERY_MONGODB_BACKEND_SETTINGS = {
    "host": host,
    "port": port,
    "database": database,
    "taskmeta_collection": "stock_taskmeta_collection",
}

Celery gets upset about various things, hence the need to specify CELERY_ACCEPT_CONTENT. It also gets upset that it’s running as root, so you need to update your .profile.d/ file so it contains the line:

# to run celery as root
export C_FORCE_ROOT=1

Always remember to use virtualenv when developing Python code that will be deployed somewhere else; among other things, it lets you run a ‘pip freeze’ to generate the contents of requirements.txt.

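For reference, a ‘pip freeze’ after the installs below produces the three packages I asked for plus their transitive dependencies, each pinned to an exact version. The version numbers here are illustrative of that era, not the originals:

```text
amqp==1.4.5
anyjson==0.3.3
billiard==3.3.0.17
celery==3.1.11
celery-with-mongodb==3.0
kombu==3.0.16
pymongo==2.7.1
pytz==2014.4
```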
And here’s the actual series of shell commands that I used to set that up:

virtualenv venv
source venv/bin/activate
pip install celery
pip install pymongo
pip install -U celery-with-mongodb
pip freeze > requirements.txt

On the node.js side of things the code is pretty close to what you see in the tutorials pointed to above:

var express = require('express');
var path = require('path');
var favicon = require('static-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');
var compress = require('compression');

var mongo = require('mongodb');
var monk = require('monk');
var db = monk(process.env.MONGO_URL || 'localhost:27017/test1');

var livedata = require('./routes/livedata');

var app = express();

// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');

app.use(express.static(path.join(__dirname, 'public')));

app.use(function(req, res, next) {
    req.db = db;
    next();
});

app.use('/livedata', livedata);

// catch 404 and forward to error handler
app.use(function(req, res, next) {
    var err = new Error('Not Found');
    err.status = 404;
    next(err);
});

// error handlers

// development error handler
// will print stacktrace
if (app.get('env') === 'development') {
    app.use(function(err, req, res, next) {
        res.status(err.status || 500);
        res.render('error', {
            message: err.message,
            error: err
        });
    });
}

// production error handler
// no stacktraces leaked to user
app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.render('error', {
        message: err.message,
        error: {}
    });
});

app.set('port', (process.env.PORT || 5000));

app.listen(app.get('port'), function() {
  console.log("Node app is running at localhost:" + app.get('port'));
  console.log("MONGODB_DATABASE:" + process.env.MONGODB_DATABASE);
});

module.exports = app;

In a lot of this code you’ll see the use of ‘||’ (in the JavaScript) and ‘or’ (in the Python). That’s what allows me to use the same code in testing and production: the MongoDB plugin sets a bunch of environment variables on the dokku host, but I use a much simpler setup on my development machines, where MongoDB is simply running normally.
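The pattern itself is just short-circuit defaulting. A minimal sketch of the Python side (variable names and defaults match the worker code above):

```python
import os

# On a dokku host the MongoDB plugin injects MONGO_URL into the environment;
# on a development machine it is typically unset, so evaluation falls
# through to the local default after the 'or'.
mongo_url = os.environ.get('MONGO_URL') or 'mongodb://localhost:27017/jobs'
database = os.environ.get('MONGODB_DATABASE') or 'test1'

print(mongo_url)
print(database)
```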

The ‘livedata’ stuff above is the part that uses express to read the relevant data from the MongoDB. Here’s the important part of the livedata.js file:

var express = require('express');
var router = express.Router();

/* GET data from db. */
router.get('/', function(req, res) {
  var db = req.db;
  var collection = db.get('foobar');
  collection.findOne({datakey: req.param("d")}, function(e, docs) {
    if (docs) {
      // return the matching document as JSON
      res.json(docs);
    }
  });
});

module.exports = router;

As I mentioned, because I use multiple buildpacks, I needed to create a .buildpacks file that looks something like this:
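The file just lists one buildpack URL per line, Python first and Node.js second. The exact tag I pinned for the older Python buildpack isn’t recorded here, so the ref below is illustrative:

```text
https://github.com/heroku/heroku-buildpack-python.git#v28
https://github.com/heroku/heroku-buildpack-nodejs.git
```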

Note that I needed to specify an earlier version of the python buildpack: the most up-to-date python buildpack didn’t work with the DigitalOcean dokku droplet.

And finally, here’s what my package.json file looks like:

{
  "name": "AwesomeApp",
  "version": "0.1.0",
  "private": true,
  "engines": {
    "node": "0.10.x"
  },
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "express": "~4.0.0",
    "compression": "*",
    "static-favicon": "~1.0.0",
    "morgan": "~1.0.0",
    "cookie-parser": "~1.0.1",
    "body-parser": "~1.0.0",
    "debug": "~0.7.4",
    "jade": "~1.3.0",
    "mongodb": "*",
    "monk": "*"
  }
}

What this allows me to do is to use git to handle my local revisions, and then to push the appropriate version to whatever number of remote dokku instances I want.  This allows me, for example, to work on a development branch, quickly switch to the production branch to implement a quick fix, deploy that fix, then switch back to the development branch, finish my work, and merge those changes into the master branch right before pushing it out to my dokku instances.  It was the first time I actually felt like I was using branching and merging in git as it is meant to be used!
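That cycle can be sketched in a throwaway repo. The ‘dokkuone’ pushes are shown as comments since they need a real remote; everything else runs locally:

```shell
# A disposable repo demonstrating the branch/merge/deploy cycle described above.
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"   # placeholder identity for the demo
git config user.name "Dev"
echo "v1" > app.txt
git add app.txt
git commit -qm "initial release"
git branch -M master                      # normalize the branch name

git checkout -qb development              # day-to-day feature work happens here
echo "feature" >> app.txt
git commit -qam "feature work in progress"

git checkout -q master                    # hop over for a production hotfix
echo "hotfix" > fix.txt
git add fix.txt
git commit -qm "quick fix"
# git push dokkuone master                # deploy the fix

git checkout -q development               # resume and finish the feature
git checkout -q master
git merge -q --no-edit development        # fold the finished work into master
# git push dokkuone master                # deploy the new release
```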

Here’s what the output from a typical push to the dokku instance looks like:

$ git push dokkuone master
Counting objects: 5, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 365 bytes | 0 bytes/s, done.
Total 3 (delta 1), reused 0 (delta 0)
-----> Cleaning up ...
-----> Building zon ...
remote: Cloning into '/tmp/tmp.zuKTAzx168'...
remote: done.
remote: HEAD is now at 097b997... Things seem to be working; change to a more reasonable logging level
       Multipack app detected
=====> Downloading Buildpack:
=====> Detected Framework: Python
-----> No runtime.txt provided; assuming python-2.7.6.
-----> Using Python runtime (python-2.7.6)
-----> Installing dependencies using Pip (1.5.4)
       Cleaning up...
=====> Downloading Buildpack:
=====> Detected Framework: Node.js
-----> Requested node range:  0.10.x
-----> Resolved node version: 0.10.28
-----> Downloading and installing node
-----> Found existing node_modules directory; skipping cache
-----> Rebuilding any native dependencies

       > bson@0.2.8 install /build/app/node_modules/mongodb/node_modules/bson
       > (node-gyp rebuild 2> builderror.log) || (exit 0)

       make: Entering directory `/build/app/node_modules/mongodb/node_modules/bson/build'
         CXX(target) Release/
         SOLINK_MODULE(target) Release/
         SOLINK_MODULE(target) Release/ Finished
         COPY Release/bson.node
       make: Leaving directory `/build/app/node_modules/mongodb/node_modules/bson/build'

       > kerberos@0.0.3 install /build/app/node_modules/mongodb/node_modules/kerberos
       > (node-gyp rebuild 2> builderror.log) || (exit 0)

       make: Entering directory `/build/app/node_modules/mongodb/node_modules/kerberos/build'
         SOLINK_MODULE(target) Release/
         SOLINK_MODULE(target) Release/ Finished
         COPY Release/kerberos.node
       make: Leaving directory `/build/app/node_modules/mongodb/node_modules/kerberos/build'

       > bson@0.2.7 install /build/app/node_modules/monk/node_modules/mongoskin/node_modules/mongodb/node_modules/bson
       > (node-gyp rebuild 2> builderror.log) || (exit 0)

       make: Entering directory `/build/app/node_modules/monk/node_modules/mongoskin/node_modules/mongodb/node_modules/bson/build'
         CXX(target) Release/
         SOLINK_MODULE(target) Release/
         SOLINK_MODULE(target) Release/ Finished
         COPY Release/bson.node
       make: Leaving directory `/build/app/node_modules/monk/node_modules/mongoskin/node_modules/mongodb/node_modules/bson/build'

       > kerberos@0.0.3 install /build/app/node_modules/monk/node_modules/mongoskin/node_modules/mongodb/node_modules/kerberos
       > (node-gyp rebuild 2> builderror.log) || (exit 0)

       make: Entering directory `/build/app/node_modules/monk/node_modules/mongoskin/node_modules/mongodb/node_modules/kerberos/build'
         SOLINK_MODULE(target) Release/
         SOLINK_MODULE(target) Release/ Finished
         COPY Release/kerberos.node
       make: Leaving directory `/build/app/node_modules/monk/node_modules/mongoskin/node_modules/mongodb/node_modules/kerberos/build'
       express@4.0.0 /build/app/node_modules/express
       parseurl@1.0.1 /build/app/node_modules/express/node_modules/parseurl
       accepts@1.0.0 /build/app/node_modules/express/node_modules/accepts
       mime@1.2.11 /build/app/node_modules/express/node_modules/accepts/node_modules/mime
       negotiator@0.3.0 /build/app/node_modules/express/node_modules/accepts/node_modules/negotiator
       type-is@1.0.0 /build/app/node_modules/express/node_modules/type-is
       mime@1.2.11 /build/app/node_modules/express/node_modules/type-is/node_modules/mime
       range-parser@1.0.0 /build/app/node_modules/express/node_modules/range-parser
       cookie@0.1.0 /build/app/node_modules/express/node_modules/cookie
       buffer-crc32@0.2.1 /build/app/node_modules/express/node_modules/buffer-crc32
       fresh@0.2.2 /build/app/node_modules/express/node_modules/fresh
       methods@0.1.0 /build/app/node_modules/express/node_modules/methods
       send@0.2.0 /build/app/node_modules/express/node_modules/send
       debug@0.7.4 /build/app/node_modules/debug
       mime@1.2.11 /build/app/node_modules/express/node_modules/send/node_modules/mime
       cookie-signature@1.0.3 /build/app/node_modules/express/node_modules/cookie-signature
       merge-descriptors@0.0.2 /build/app/node_modules/express/node_modules/merge-descriptors
       utils-merge@1.0.0 /build/app/node_modules/express/node_modules/utils-merge
       escape-html@1.0.1 /build/app/node_modules/express/node_modules/escape-html
       qs@0.6.6 /build/app/node_modules/express/node_modules/qs
       serve-static@1.0.1 /build/app/node_modules/express/node_modules/serve-static
       send@0.1.4 /build/app/node_modules/express/node_modules/serve-static/node_modules/send
       mime@1.2.11 /build/app/node_modules/express/node_modules/serve-static/node_modules/send/node_modules/mime
       fresh@0.2.0 /build/app/node_modules/express/node_modules/serve-static/node_modules/send/node_modules/fresh
       range-parser@0.0.4 /build/app/node_modules/express/node_modules/serve-static/node_modules/send/node_modules/range-parser
       path-to-regexp@0.1.2 /build/app/node_modules/express/node_modules/path-to-regexp
       compression@1.0.2 /build/app/node_modules/compression
       bytes@0.3.0 /build/app/node_modules/compression/node_modules/bytes
       negotiator@0.4.3 /build/app/node_modules/compression/node_modules/negotiator
       compressible@1.0.1 /build/app/node_modules/compression/node_modules/compressible
       static-favicon@1.0.2 /build/app/node_modules/static-favicon
       morgan@1.0.0 /build/app/node_modules/morgan
       bytes@0.2.1 /build/app/node_modules/morgan/node_modules/bytes
       cookie-parser@1.0.1 /build/app/node_modules/cookie-parser
       cookie@0.1.0 /build/app/node_modules/cookie-parser/node_modules/cookie
       cookie-signature@1.0.3 /build/app/node_modules/cookie-parser/node_modules/cookie-signature
       body-parser@1.0.2 /build/app/node_modules/body-parser
       type-is@1.1.0 /build/app/node_modules/body-parser/node_modules/type-is
       mime@1.2.11 /build/app/node_modules/body-parser/node_modules/type-is/node_modules/mime
       raw-body@1.1.4 /build/app/node_modules/body-parser/node_modules/raw-body
       bytes@0.3.0 /build/app/node_modules/body-parser/node_modules/raw-body/node_modules/bytes
       qs@0.6.6 /build/app/node_modules/body-parser/node_modules/qs
       jade@1.3.1 /build/app/node_modules/jade
       commander@2.1.0 /build/app/node_modules/jade/node_modules/commander
       mkdirp@0.3.5 /build/app/node_modules/jade/node_modules/mkdirp
       transformers@2.1.0 /build/app/node_modules/jade/node_modules/transformers
       promise@2.0.0 /build/app/node_modules/jade/node_modules/transformers/node_modules/promise
       is-promise@1.0.0 /build/app/node_modules/jade/node_modules/transformers/node_modules/promise/node_modules/is-promise
       css@1.0.8 /build/app/node_modules/jade/node_modules/transformers/node_modules/css
       css-parse@1.0.4 /build/app/node_modules/jade/node_modules/transformers/node_modules/css/node_modules/css-parse
       css-stringify@1.0.5 /build/app/node_modules/jade/node_modules/transformers/node_modules/css/node_modules/css-stringify
       uglify-js@2.2.5 /build/app/node_modules/jade/node_modules/transformers/node_modules/uglify-js
       source-map@0.1.33 /build/app/node_modules/jade/node_modules/transformers/node_modules/uglify-js/node_modules/source-map
       amdefine@0.1.0 /build/app/node_modules/jade/node_modules/transformers/node_modules/uglify-js/node_modules/source-map/node_modules/amdefine
       optimist@0.3.7 /build/app/node_modules/jade/node_modules/transformers/node_modules/uglify-js/node_modules/optimist
       wordwrap@0.0.2 /build/app/node_modules/jade/node_modules/transformers/node_modules/uglify-js/node_modules/optimist/node_modules/wordwrap
       character-parser@1.2.0 /build/app/node_modules/jade/node_modules/character-parser
       monocle@1.1.51 /build/app/node_modules/jade/node_modules/monocle
       readdirp@0.2.5 /build/app/node_modules/jade/node_modules/monocle/node_modules/readdirp
       minimatch@0.2.14 /build/app/node_modules/jade/node_modules/monocle/node_modules/readdirp/node_modules/minimatch
       lru-cache@2.5.0 /build/app/node_modules/jade/node_modules/monocle/node_modules/readdirp/node_modules/minimatch/node_modules/lru-cach
       sigmund@1.0.0 /build/app/node_modules/jade/node_modules/monocle/node_modules/readdirp/node_modules/minimatch/node_modules/sigmund
       with@3.0.0 /build/app/node_modules/jade/node_modules/with
       uglify-js@2.4.13 /build/app/node_modules/jade/node_modules/with/node_modules/uglify-js
       async@0.2.10 /build/app/node_modules/jade/node_modules/with/node_modules/uglify-js/node_modules/async
       source-map@0.1.33 /build/app/node_modules/jade/node_modules/with/node_modules/uglify-js/node_modules/source-map
       amdefine@0.1.0 /build/app/node_modules/jade/node_modules/with/node_modules/uglify-js/node_modules/source-map/node_modules/amdefine
       optimist@0.3.7 /build/app/node_modules/jade/node_modules/with/node_modules/uglify-js/node_modules/optimist
       wordwrap@0.0.2 /build/app/node_modules/jade/node_modules/with/node_modules/uglify-js/node_modules/optimist/node_modules/wordwrap
       uglify-to-browserify@1.0.2 /build/app/node_modules/jade/node_modules/with/node_modules/uglify-js/node_modules/uglify-to-browserify
       constantinople@2.0.0 /build/app/node_modules/jade/node_modules/constantinople
       uglify-js@2.4.13 /build/app/node_modules/jade/node_modules/constantinople/node_modules/uglify-js
       async@0.2.10 /build/app/node_modules/jade/node_modules/constantinople/node_modules/uglify-js/node_modules/async
       source-map@0.1.33 /build/app/node_modules/jade/node_modules/constantinople/node_modules/uglify-js/node_modules/source-map
       amdefine@0.1.0 /build/app/node_modules/jade/node_modules/constantinople/node_modules/uglify-js/node_modules/source-map/node_modules/amdefine
       optimist@0.3.7 /build/app/node_modules/jade/node_modules/constantinople/node_modules/uglify-js/node_modules/optimist
       wordwrap@0.0.2 /build/app/node_modules/jade/node_modules/constantinople/node_modules/uglify-js/node_modules/optimist/node_modules/wordwrap
       uglify-to-browserify@1.0.2 /build/app/node_modules/jade/node_modules/constantinople/node_modules/uglify-js/node_modules/uglify-to-browserify
       mongodb@1.4.5 /build/app/node_modules/mongodb
       bson@0.2.8 /build/app/node_modules/mongodb/node_modules/bson
       nan@1.0.0 /build/app/node_modules/mongodb/node_modules/bson/node_modules/nan
       kerberos@0.0.3 /build/app/node_modules/mongodb/node_modules/kerberos
       readable-stream@1.0.27-1 /build/app/node_modules/mongodb/node_modules/readable-stream
       core-util-is@1.0.1 /build/app/node_modules/mongodb/node_modules/readable-stream/node_modules/core-util-is
       isarray@0.0.1 /build/app/node_modules/mongodb/node_modules/readable-stream/node_modules/isarray
       string_decoder@0.10.25-1 /build/app/node_modules/mongodb/node_modules/readable-stream/node_modules/string_decoder
       inherits@2.0.1 /build/app/node_modules/mongodb/node_modules/readable-stream/node_modules/inherits
       monk@0.9.0 /build/app/node_modules/monk
       mongoskin@1.4.1 /build/app/node_modules/monk/node_modules/mongoskin
       mongodb@1.4.1 /build/app/node_modules/monk/node_modules/mongoskin/node_modules/mongodb
       bson@0.2.7 /build/app/node_modules/monk/node_modules/mongoskin/node_modules/mongodb/node_modules/bson
       nan@0.8.0 /build/app/node_modules/monk/node_modules/mongoskin/node_modules/mongodb/node_modules/bson/node_modules/nan
       kerberos@0.0.3 /build/app/node_modules/monk/node_modules/mongoskin/node_modules/mongodb/node_modules/kerberos
       mpromise@0.5.1 /build/app/node_modules/monk/node_modules/mpromise
       compress@0.1.9 /build/app/node_modules/compress
-----> Writing a custom .npmrc to circumvent npm bugs
-----> Installing dependencies
-----> Caching node_modules directory for future builds
-----> Cleaning up node-gyp and npm artifacts
-----> Building runtime environment
       Using release configuration from last framework Node.js:
       addons: []
       default_process_types: {}
-----> Discovering process types
       Procfile declares types -> worker, web
-----> Releasing zon ...

-----> foobar linked to jeffutter/mongodb container
-----> Injecting Shoreman ...
-----> Deploying foobar ...
=====> Application deployed:

   0e4dea4..097b997  master -> master

So that’s about all there is to it. It’s kind of nice being able to use a system that weaves together simple things like virtualenv, git, Docker, and buildstep to create a really powerful deployment system that allows me to focus on application development and not on operations! Thank you, Jeff Lindsay! As always, feel free to leave a comment or contact me directly if you have any questions.