From charlesreid1

overarching goal

2018/Data Project


current status: resolved to use mongodb

next step goals:

  • run mongodb query in javascript
  • collect more data
  • visualize data for single chart/single time series with D3
  • visualize more data using Grafana
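The "run mongodb query" goal can be sketched language-agnostically: an aggregation pipeline is just a list of documents, whether it's submitted from the mongo shell in javascript or from a python driver like pymongo. A hedged sketch, using plain python dicts — the collection and field names (wiki_edits, timestamp, chars) are hypothetical placeholders, not the actual schema:

```python
# Sketch of a MongoDB aggregation pipeline for per-day edit counts.
# The collection name ("wiki_edits") and the fields ("timestamp" as an
# ISO-8601 string, "chars" for characters changed) are assumptions.
# With pymongo this would run as db["wiki_edits"].aggregate(pipeline).

pipeline = [
    # keep only edits from 2018
    {"$match": {"timestamp": {"$gte": "2018-01-01", "$lt": "2019-01-01"}}},
    # bucket by day (first 10 chars of the ISO timestamp),
    # count edits and sum characters changed
    {"$group": {
        "_id": {"$substr": ["$timestamp", 0, 10]},
        "edit_count": {"$sum": 1},
        "char_count": {"$sum": "$chars"},
    }},
    # chronological order for feeding a time series chart
    {"$sort": {"_id": 1}},
]

if __name__ == "__main__":
    import json
    print(json.dumps(pipeline, indent=2))
```

The same pipeline pasted into the mongo shell is valid javascript, which is what makes the pipeline-as-data approach low friction.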

toy problem:

  • use instrumented traveling salesman problem (or Rubik's Cube code, or some other Project Euler code)
  • proof of concept use of monitoring
  • greenfield deployment of netdata


  • understand/monitor/understand large complex systems
  • minimize time to set up database, add metrics, visualize, gain insight, repeat

toy problem:

  • instrumented TSP or other

toy problem goal

instrument a code, instrument a node, combine collected data about both

  • pick a computationally intensive problem
  • pick a node platform
  • collect the data - mongodb

instrument a TSP problem code, show a proof of concept of what we want to be able to record/measure/visualize
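A minimal sketch of what "instrumented TSP" could look like: a brute-force solver that records routes evaluated, best-so-far improvements, and wall time into a plain metrics dict — exactly the kind of record that could later be inserted into mongodb and plotted. The city data and metric names are made up for illustration.

```python
# Instrumented brute-force TSP: proof of concept for collecting metrics
# alongside the computation. Metric names and city coordinates are
# illustrative, not from any existing project code.
import itertools
import math
import time

def solve_tsp(cities):
    """Exhaustively search all tours, collecting metrics as we go."""
    metrics = {"routes_evaluated": 0, "improvements": 0}
    names = sorted(cities)
    best_tour, best_dist = None, float("inf")
    start = time.time()
    # fix the first city to avoid counting rotations of the same tour
    for perm in itertools.permutations(names[1:]):
        tour = (names[0],) + perm
        dist = sum(
            math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
            for i in range(len(tour))
        )
        metrics["routes_evaluated"] += 1
        if dist < best_dist:
            best_tour, best_dist = tour, dist
            metrics["improvements"] += 1
    metrics["wall_time_s"] = time.time() - start
    return best_tour, best_dist, metrics

if __name__ == "__main__":
    cities = {"a": (0, 0), "b": (0, 1), "c": (1, 1), "d": (1, 0)}
    tour, dist, metrics = solve_tsp(cities)
    print(tour, dist, metrics)
```

The metrics dict is deliberately flat so it can go straight into a mongodb collection as one document per run.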

focus on greenfield deployments - install netdata, collect stats, visualize them

d3 viz

10 visualizations:

more calendars:

  • wikipedia article edit calendars
  • hourly calendar?
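The data-prep half of a calendar-of-edits viz is just bucketing timestamps by day (or by hour, for an hourly calendar); D3 then renders the resulting day-to-count mapping. A small sketch — the sample timestamps are made up:

```python
# Bucket raw edit timestamps into per-day counts, the shape a D3
# calendar heatmap typically consumes. Sample data is invented.
from collections import Counter

def edits_per_day(timestamps):
    """Map ISO timestamps to a {YYYY-MM-DD: count} dict."""
    return dict(Counter(ts[:10] for ts in timestamps))

sample = [
    "2018-01-02T09:15:00",
    "2018-01-02T17:40:00",
    "2018-01-03T08:05:00",
]
counts = edits_per_day(sample)
# counts == {"2018-01-02": 2, "2018-01-03": 1}
```

For an hourly calendar the same idea applies with `ts[11:13]` as the bucket key; for character counts, sum a per-edit size field instead of counting.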

back burner:

Fix a shrubbery

Also see D3x10

blog posts


charlesreid1 bot:


bot instrumentation:

  • dashboard, monitoring, statistics, status
  • bot dashboard with grafana


apollo bots:

  • 14/15/16/17
  • incorporate lunar surface dialogue

apollo references:


Bots/New Apollo


Genealogy photos:

  • Photos cropped/organized by family
    • 2011
    • 2017
    • Rename scheme
    • Notes - A2k11
    • Notes - R2k11
    • Notes - A2k17
    • Notes - K2k17
    • Notes - R2k17
  • Send email to fam with link on Dropbox


  • Pauline and Bruce chapters
  • Historical research planning

stretch goal

test backup and restore scripts

automate automate automate

  • use an AWS node or a GCP node to actually test the backups, start to finish
  • create a second node that's a complete and total backup of the first
  • in the process of doing this, we'll have a lot of questions to answer and software to fix
  • letsencrypt certificate generation
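A hedged sketch of what "test the backups, start to finish" could look like in script form: archive a directory, restore it to a fresh location, and verify the round trip. The real backup scripts cover wiki/db data; this only shows the verification pattern on throwaway files.

```python
# Backup/restore round-trip check: archive a directory, restore it,
# and verify contents match. Paths and names are throwaway examples,
# not the actual backup scripts' layout.
import filecmp
import tarfile
import tempfile
from pathlib import Path

def backup_and_restore_ok(src: Path) -> bool:
    """Return True if src survives a tar.gz backup/restore unchanged."""
    with tempfile.TemporaryDirectory() as tmp:
        archive = Path(tmp) / "backup.tar.gz"
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(src, arcname="data")
        restore_dir = Path(tmp) / "restore"
        with tarfile.open(archive) as tar:
            tar.extractall(restore_dir)
        cmp = filecmp.dircmp(src, restore_dir / "data")
        # no differing files, nothing missing on either side
        return not (cmp.diff_files or cmp.left_only or cmp.right_only)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        src = Path(d) / "site"
        src.mkdir()
        (src / "index.html").write_text("hello")
        print(backup_and_restore_ok(src))
```

On a real AWS/GCP test node the same check would run against the restored wiki and database directories rather than temp files.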

back burner


monitoring hardware

  • network tap/switch
  • wired router

new router:

  • website with database of embedded dev boards:
  • Banana Pi R2 is designed with built-in switch hardware, so it's intended as a Raspberry Pi of sorts for home routers. Long term, this would be a good hardware platform.
  • Banana Pi R2 Link: [1]

data streams

data streams:

  • sensor data from a physical sensor (raspberry pi, gpio, radio, sdr)
  • rojo/jupiter log data
  • network log info from bro
  • twitter/news scraping

bokeh viz


interactive dashboards:

  • glue between command-line scripts and visual graphs
  • don't worry about "live" ajax refreshing - just focus on analytics and visualization


data store (db)

the database is the central thread for everything.

get a database solution up and running over the management lan.

minimize friction and time to bring up/explore/check new collection.

Note: minimizing friction mainly just comes down to (a) getting it running, thank you very much docker, and (b) familiarity with syntax. everything else is pretty seamless.

completed data streams

completed data streams:

We're not using Collectd anymore; we're using Netdata

Docker/System Stats is a possible replacement for collectd

wiki visualization


  • calendar of edits
  • calendar of character counts of edits

calendar visualization:


More organizing and creating git repositories:

  • data organization for repos dealing with data collection, monitoring, or storing raw data
  • created repos for raw data - census, maps, git, wiki
  • these are contained in a data-master repo that is cloned directly into htdocs dir
  • data is accessible at

d3 on

data scripts


rojo git script:

  • set up script to pull latest git commit data
  • make csv with commit data
  • push new csv to git data repo
  • tested
  • crontab
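A sketch of the middle step of the rojo git script (git log to csv). Rather than shelling out to git here, it parses sample output of `git log --pretty=format:'%H|%an|%aI'`; the format string and csv column names are one reasonable choice, not necessarily what the actual script uses.

```python
# Convert pipe-delimited git log output (hash|author|ISO date) into CSV.
# The sample log text is invented; in the real script this would come
# from running git log with a --pretty=format string.
import csv
import io

def log_to_csv(log_text):
    """Convert pipe-delimited git log output into CSV text."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["hash", "author", "date"])
    for line in log_text.strip().splitlines():
        # split on the first two pipes only, in case a field contains one
        writer.writerow(line.split("|", 2))
    return out.getvalue()

sample_log = (
    "abc123|charlesreid1|2018-03-01T12:00:00-07:00\n"
    "def456|charlesreid1|2018-03-02T08:30:00-07:00\n"
)
csv_text = log_to_csv(sample_log)
```

The resulting csv is what gets pushed to the git data repo; the jupiter wiki script follows the same pull/transform/push shape with wiki edit data and json.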

jupiter wiki script:

  • set up script to compile latest wiki edit/graph data
  • make csv/json with data
  • push new data to wiki data repo
  • tested
  • crontab