Tuesday, September 22, 2015

Open Tree of Life paper is out!

The flagship paper from the Open Tree of Life project is finally out! The paper culminates three years of work by the participants of the Open Tree of Life project, including me. I share first authorship with my former advisor Stephen Smith, with whom I worked alongside Mark Holder, Joseph Brown, Jonathan Rees, and others to build tools that we used to combine hundreds of phylogenetic trees and source taxonomies into a comprehensive tree of life containing over 2.3 million tips.

The tree is not only browsable online and downloadable in its entirety (for those of you with a lot of RAM), but is also accessible via web services which are already being used to facilitate outside research and provide example data for the development of research pipelines such as Arbor. The machinery we built to construct and serve the tree of life was designed with frequent updates and community participation in mind—new versions of the tree will be released as community feedback and contributions allow us to refine details and improve the accuracy of the relationships among organisms.
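As a quick illustration of what querying those web services looks like, here is a minimal sketch in Python. The endpoint path, parameter name, and the example OTT taxon ids are assumptions based on the public Open Tree APIs and may differ between API versions, so check the current API docs before relying on them.

```python
import json
import urllib.request

# Assumed endpoint for the "induced subtree" service, which returns the
# subtree of the synthetic tree of life connecting a set of taxa.
API_URL = "https://api.opentreeoflife.org/v3/tree_of_life/induced_subtree"

def build_induced_subtree_request(ott_ids):
    """Build a JSON POST request for the induced_subtree service.

    ott_ids: list of Open Tree Taxonomy (OTT) ids for the taxa of interest.
    """
    payload = json.dumps({"ott_ids": ott_ids}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Hypothetical example ids; real ids can be found by name lookup
# (taxonomic name resolution) against the Open Tree Taxonomy.
req = build_induced_subtree_request([770315, 417950])
# To actually fetch the subtree (requires network access):
#   with urllib.request.urlopen(req) as resp:
#       result = json.load(resp)
```

The request is built separately from being sent so the structure is easy to inspect; the response (when fetched) includes the induced subtree in Newick format.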

Although this project is not the first to approach the challenging task of publishing the tree of life, it is the first to combine available phylogenetic data in a way that makes a single comprehensive phylogenetic hypothesis spanning all life accessible to the public and the scientific community, and we are pretty stoked about what we've accomplished. If you're interested, you can read more about the tree in this press release from the University of Michigan and this article from the Christian Science Monitor, or for more details check out our recent AMA on reddit or browse our open source software repositories here.

Wednesday, September 2, 2015


I recently read a blog post by Google about "inceptionism," in which they describe a way to gain insight into the behavior of neural networks for image recognition by asking the networks to show what they "see" in images of random noise and arbitrary scenes. The results of this process are called "deep dream" images for short, and they are fascinating but also more than a little freaky. Popular Science called them nightmares, and I don't disagree.

Actually, the images really remind me of the visions I used to have as a young child when I'd rub my eyes really hard just before going to sleep. Strange fractalized swirling noise with recognizable objects blending together in infinite spinning repetition. They freaked me out then too, but I couldn't help myself. I'm not sure whether it's exciting or terrifying to think that Google's deep image learning tools have apparently managed to mimic the human brain so accurately that they can even reproduce the patterns we see where none really exist.

Search Google for more inceptionist images.