My Bedroom Temperature

For a long time, I had planned to expose data about myself publicly, and I came up with the idea of a personal dashboard. I launched it, and it is live on my home page. Go ahead and take a look. →

I had three requirements that shaped this idea:

  • Data should be about myself and my daily life
  • Data should flow to the system automatically
  • It should look cool

Collecting data

As a data freak, it didn’t take me long to realize that I can actually collect a lot of data about myself using various APIs. Right now I am collecting data about my daily sleep, weight, places I have been, music I listen to, sports activities and online presence, all through APIs.

Places I have been

You can’t imagine how much meaningful data you already have online. I built a system on top of these APIs and figured out my data sources as follows:

  • Fitness & Weight: I have been using RunKeeper since 2011 and have tracked all my activities with it ever since. They did a great job launching the HealthGraph API, a centralized graph (like Facebook’s Open Graph) that can be used by all fitness apps and devices (e.g. Fitbit).
  • Online social presence: At the moment, I’m collecting my daily tweet count, Twitter follower count, Facebook friend count and Klout score.
  • Location data: I have been an addicted Foursquare user since 2010.
  • Music data: All the music I listen to on Rdio, on desktop and iPhone, is scrobbled to Last.fm, and I use this Chrome extension to scrobble automatically when I listen on YouTube, Vimeo, etc.
  • Bedroom temperature: I have a Raspberry Pi with a TMP102 temperature sensor attached, sitting in my bedroom. In fact, this whole system runs on that board.
  • Sleep data: I use the Sleep Cycle app on my phone; my Jawbone UP is on the way. Both of them send their data to RunKeeper, so this also comes from the RunKeeper API.
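As a taste of the temperature source, here is a minimal sketch of how a TMP102 reading can be decoded. The conversion math comes from the sensor’s 12-bit register format; on the actual board the raw bytes would come from the I2C bus (the `smbus` call in the comment is an illustration), and the sample bytes below are made up for demonstration.

::python
def tmp102_to_celsius(msb, lsb):
    """Convert the two raw TMP102 register bytes to degrees Celsius.

    The TMP102 reports a 12-bit reading: the top 8 bits in the first
    byte, the bottom 4 bits in the upper half of the second byte.
    Each step is 0.0625 C; negative values are two's complement.
    """
    raw = (msb << 4) | (lsb >> 4)
    if raw & 0x800:          # sign bit set -> negative temperature
        raw -= 4096
    return raw * 0.0625

# On the Pi, msb and lsb would come from the I2C bus, e.g.:
#   import smbus
#   msb, lsb = smbus.SMBus(1).read_i2c_block_data(0x48, 0x00, 2)
print(tmp102_to_celsius(0x19, 0x00))  # → 25.0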

Songs I listened to over days

And the best part is, you can plug almost anything into this system very easily. Here’s how it runs:

The back-end

I wrote the entire system in Python. It is still baby steps, but it does the job. I use Azure Table Storage, a NoSQL database in the cloud. It is negligibly cheap and saves me from thinking about backing up the data.

Right now, this is my implementation, and it is open source. You can plug in a MySQL, Redis or SimpleDB implementation instead of Windows Azure.
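To give an idea of what such a pluggable backend looks like, here is a hedged sketch of an in-memory datastore. The method names (`set`/`get`) are my assumptions for illustration, not necessarily the actual simplegauges interface:

::python
class MemoryDatastore(object):
    """A minimal in-memory stand-in for the Azure Table Storage driver.

    One record per (gauge key, day): saving again for the same day
    simply overwrites the previous value, which is exactly the
    behavior a daily gauge needs.
    """

    def __init__(self):
        self._records = {}

    def set(self, key, day, value):
        self._records[(key, day)] = value

    def get(self, key, day):
        return self._records.get((key, day))

store = MemoryDatastore()
store.set('twitter.followers', '2013-02-01', 1500)
store.set('twitter.followers', '2013-02-01', 1502)   # same day: overwrite
print(store.get('twitter.followers', '2013-02-01'))  # → 1502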

I called this storage driver simplegauges. It is open source on GitHub (still under development, not yet packaged). Basically, whatever numeric data you give it, it stores for the day or hour you specify. Here is how I save my daily follower count every 6 hours:

::python
gauge = gauges.DailyGauge('twitter.followers', datastore)
followers_count = api.me().followers_count
gauge.save(today, followers_count)

That simple. After that, only one twitter.followers record exists in the database for that day.

So how does the data come in?

In order to retrieve data from each source, there are methods that I call tasks. Here is what the task retrieving my Twitter follower count looks like:

::python
@requires('twitter.consumer_key', 'twitter.consumer_secret',
          'twitter.access_token', 'twitter.access_secret')
def followers_count(gauge_factory, config, logger):
    gauge = gauge_factory('twitter.followers')
    # .. initialize Twitter api here, trivial
    count = api.me().followers_count
    gauge.save(today_utc(), count)

These few lines of code run every 2 hours to bring data into the database.

This system is called personal-dashboard, and it is also open source on GitHub. Basically, it is a long-running process that executes the plugged-in tasks based on a fixture you define. The config keys you see in @requires come from a config file you handcraft. I use supervisord to keep this process up and running on my Raspberry Pi.
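For flavor, here is a rough sketch of how such a @requires decorator could work; this is my own guess at the mechanics for illustration, not the actual personal-dashboard code:

::python
import functools

def requires(*keys):
    """Refuse to run a task unless all named config keys are present."""
    def decorator(task):
        @functools.wraps(task)
        def wrapper(gauge_factory, config, logger):
            missing = [k for k in keys if k not in config]
            if missing:
                raise KeyError('missing config keys: %s' % ', '.join(missing))
            return task(gauge_factory, config, logger)
        return wrapper
    return decorator

@requires('twitter.consumer_key', 'twitter.consumer_secret')
def followers_count(gauge_factory, config, logger):
    return 'ran'

config = {'twitter.consumer_key': 'x', 'twitter.consumer_secret': 'y'}
print(followers_count(None, config, None))  # → ran

This way a half-filled config file fails loudly at task time instead of producing confusing API errors later.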

Data is pushed to the front-end periodically (every 20 minutes) by collecting data from the simplegauges service (which handles all aggregation, missing data points, interpolation, etc.) and uploading a file (see this, may expire) to Azure Blob Storage. This .js file containing the report uses the JSONP technique to push the data to the browser as JSON.
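The JSONP trick itself is simple: instead of serving raw JSON (which a plain script tag cannot consume cross-origin), the report is wrapped in a JavaScript function call. A hedged sketch of generating such a file, with a made-up callback name:

::python
import json

def make_jsonp(callback, report):
    """Wrap a report dict in a JS function call so a plain <script>
    tag can load it from another origin (here, Azure Blob Storage)."""
    return '%s(%s);' % (callback, json.dumps(report))

report = {'twitter.followers': [['2013-02-01', 1502]]}
js = make_jsonp('renderDashboard', report)
print(js)  # → renderDashboard({"twitter.followers": [["2013-02-01", 1502]]});

The browser side just defines a global function with the same name, and the data arrives as a regular function argument when the script loads.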

The front-end

Although I am not really good at front-end design and coding, I have something that works and saves the day. I draw these simple yet delicious charts purely in SVG, generated by D3.js. However, I don’t publish that part of the code since it is quite hacky.

It simply processes whatever JSON data is in the .js file and renders an SVG chart in the browser. When you hover the mouse pointer over the chart, you can see previous data points instead of the latest one shown by default. Touch events on mobile are a mess right now; I could use some help if you’re good at that.

My weight over weeks

Use it yourself

You can install this whole system yourself. The installation instructions may not be complete yet, so shoot me an email if you get stuck somewhere. All of this is open source, so you can plug your own tasks in, and the cooler part is, you can contribute back so that we all can use them.

I do not publish my front-end code, but I am pretty sure you can find a chart library (e.g. Chart.js) to show your data if you would like to. I wrote my own since I wanted responsive SVGs and fine-grained control over the output.

Right now I have tasks for Twitter, Facebook, Last.fm, RunKeeper, Foursquare and the temperature sensor on GitHub. Let me know or send a pull request if you find other data sources you want to plug in.

Twitter followers over days