
May 23-26, 2016: Boot Camp Reunion

Last year, from August 3 to August 20, 2015, the IBM Brain-inspired Computing Team held a three-week Boot Camp. Now, almost nine months later, we held a Boot Camp Reunion from May 23 to 26, 2016 that brought together 64 attendees.

It was incredible to see the results from attendees, and gratifying to watch them become productive on the next-generation ecosystem and achieve state-of-the-art results within a matter of hours.

The following are three perspectives from my colleagues: Ben G. Shaw, Hartmut E. Penner, and Jeffrey L. McKinstry with Timothy Melano. Don't miss the attendee comments at the bottom of this blog entry!

Developer Workshop
Photo Credit: William Risk

Line Separator


Guest Blog by Ben G. Shaw on Attendees.

We carefully selected and vetted all attendees.

The following institutions sent return attendees:

  • Air Force Research Lab, Rome, NY and Dayton, Ohio
  • Arizona State University
  • Army Research Lab
  • Georgia Institute of Technology
  • Lawrence Berkeley National Lab
  • Lawrence Livermore National Lab
  • National University of Singapore
  • Naval Research Lab
  • Pennsylvania State University
  • Riverside Research
  • Rensselaer Polytechnic Institute
  • SRC
  • Syracuse University
  • Technology Services Corporation
  • University of California, Davis
  • University of California, Los Angeles
  • University of California, San Diego
  • University of California, Santa Cruz
  • University of Dayton
  • University of Pittsburgh
  • University of Tennessee, Knoxville
  • University of Ulm
  • University of Western Ontario
  • University of Wisconsin-Madison
In addition, the following institutions sent new attendees:
  • Department of Defense
  • Johns Hopkins University, Applied Physics Laboratory
  • Mathworks
  • MITRE Corporation
  • Oak Ridge National Laboratory
  • Pacific Northwest National Lab
  • RWTH Aachen & FZ Juelich / JARA
  • Sandia National Laboratories
  • Technical University of Munich
  • University of Florida
  • University of Notre Dame

Line Separator


Guest Blog by Hartmut E. Penner on Docker Infrastructure.

For the Boot Camp Reunion, we needed to create an environment in which participants could use our latest release. From past experience it was clear that, even with the best possible installation instructions, we could not cope with the multitude of participant systems in the time available, and we wanted to spend precious Reunion time exploring the new Eedn programming environment. Besides, training required servers with specific high-end GPUs to run efficiently in the short time available.

To support all of this while minimizing installation time and effort, we decided to use the IBM SoftLayer Cloud to provide the systems with GPUs, and Docker to package the software. The SoftLayer Cloud provides so-called Bare Metal Servers with GPUs, which give the user full hardware access to the system, all the way down to a hardware console over a web interface. The Bare Metal Servers we ordered came pre-installed with Ubuntu Linux 14.04.

Docker, as a container technology, allows an application to be packaged with all of its dependencies: runtime, system tools, system libraries, and the code itself. Unlike virtualization technology, all instances share the same kernel and therefore have much lower resource consumption, while still providing full isolation from one another.

For our software setup, we needed shared access to the GPUs to limit the amount of hardware required. Fortunately, NVIDIA provides a fully dockerized environment, which we used as a nucleus to build a specialized Docker container with all of our software and GPU access from multiple container instances. The container image was based on the CentOS 7 version of the cuda:7.5-devel image, to which we added a VNC server, MATLAB, MatConvNet, and our TrueNorth software release. Each participant received an instance with exactly the same software; the only differences were user-specific persistent storage, the SSH port, and the authorized public SSH key. With that, each user was able to connect to their instance and upload data from any Windows, macOS, or Linux laptop. VNC traffic was tunneled over SSH, ensuring authentication and encryption of all traffic between the local user and the cloud instance. For uploading models and running test data on the NS1e, we chose HTTPS between the cloud instances and the local gateway server, ensuring that this traffic also could not be compromised.
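The container build described above can be sketched as a Dockerfile. This is a minimal illustration only, assuming the public nvidia/cuda CentOS base image; the package names, file paths, and ports here are hypothetical, not the actual release recipe (the MATLAB and TrueNorth installers are site-specific and omitted).

```dockerfile
# Illustrative sketch, not the actual build recipe.
FROM nvidia/cuda:7.5-devel-centos7

# VNC server for remote desktop sessions, plus an SSH daemon for tunneling
RUN yum install -y tigervnc-server openssh-server xterm && yum clean all

# MATLAB, MatConvNet, and the TrueNorth release would be installed here
# (installers and licenses are site-specific, so only placeholders are shown)
COPY matconvnet/ /opt/matconvnet/
COPY truenorth/  /opt/truenorth/

EXPOSE 22 5901
CMD ["/usr/sbin/sshd", "-D"]
```

Because every participant's instance is built from the same image, the per-user differences (storage volume, SSH port, authorized key) can all be supplied at `docker run` time rather than baked into the image.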

The final installation was built on 7 Bare Metal Servers, each with two NVIDIA K80 GPUs and 16 NS1e boards. On those systems we had up to 80 simultaneous users accessing the instances over Wi-Fi, with parallel training and testing on the NS1e boards. The only problem with this installation was public/private key handling for the tunnels, which differs between Windows and Linux/macOS laptops. Once past this hurdle, the installation ran very smoothly without any major issues, and provided an environment in which attendees could concentrate on learning the new software rather than wrestling with installation and incompatibilities.
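From an attendee's laptop, the connection flow described above (VNC tunneled over SSH to a user-specific port) might look roughly like this; the host name, port numbers, user name, and key path are all hypothetical:

```shell
# Forward a local port to the VNC server inside this user's container,
# authenticating with the user's registered SSH key (illustrative values):
ssh -i ~/.ssh/reunion_key -p 2201 -N -L 5901:localhost:5901 user01@bm1.example.com &

# Point any VNC viewer at the tunneled local port; all VNC traffic then
# travels inside the encrypted, authenticated SSH session:
vncviewer localhost:5901
```

This is why the per-user SSH port and authorized public key were the only differences between instances: they are exactly the two values each attendee needed to establish their own tunnel.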

It was beautiful to see 80 people simultaneously using the infrastructure!

Line Separator


Guest Blog by Jeffrey L. McKinstry and Timothy Melano on Developer Workshop.

On the first day, May 23, the IBM team presented new results on convolutional networks, the NS1e16 system, and the NS16e system. This was followed by exciting new results from many of the participants -- there are now 16 publications from Boot Campers!

From May 24-26, we ran a hands-on Developer's Workshop in which the attendees were able, for the first time, to train our Energy-efficient Deep Neuromorphic (Eedn) networks on their own data and run these networks on TrueNorth. The reunion offered a variety of sessions, ranging from theoretical deep dives into the mathematical abstractions of how we mapped convolutional networks and backpropagation onto TrueNorth, to hands-on tutorials. After a few hours of guided Eedn exercises, we took the training wheels off and let the students use new datasets to create state-of-the-art neural networks. By the end of the day, students had learned how to use the tools from our latest software release and were launching overnight training runs.

The next morning, May 25, students were excited to share very competitive scores on their datasets. The energy was great. Our guests from the Air Force Research Lab were the first of many successes. On an aerial radar dataset, their scores were comparable to the performance they were getting from other, unconstrained networks; the difference was that they were now classifying images at 1,000 frames per second while consuming a minuscule amount of power!

Later in the day our friend Diego Cantor, from the University of Western Ontario, was getting better results with an Eedn network than with his unconstrained Caffe networks! His data story is actually quite entertaining: his dataset consists of ultrasound images of the human spinal column, and his original images were too big to be fed efficiently into a single TrueNorth chip. He was quite skeptical when we asked him to downsample his images to 32x32, but was later shocked to see that, with smaller images and a sparse Eedn convnet, he was able to beat his Caffe convnet -- a hat tip to brain-inspired computing indeed.

Developer Workshop

There were lots of other successes, such as Tobi Brosch achieving 87 percent on a 14-category action recognition dataset and Garrick Orchard achieving 98.7 percent accuracy on a spiking MNIST dataset; but the real success was that all of the researchers in attendance were training Eedn networks and visualizing high-accuracy results very quickly.

Developer Workshop

Line Separator


Some Attendee Comments.

"I can make networks in a day now :)"

"I truly appreciate all of the effort from the IBM staff to put these events together; they are immeasurably beneficial for this community. Not only do we gain technical insight, but we have the opportunity to reconnect as a community. I thought the use of Docker was inspired. It really simplified the process of baselining all participants without the need for complex, time-consuming installations. I plan to emulate this process for my lab server. I loved having the bootcamp reunion at Almaden. What a truly beautiful and inspiring location to work and gather as a community. ... What I like most of all is how welcoming IBM is to all of us. I think I speak for this entire group when I say that you always make us feel like we are all part of something important."

"The Docker setup that provided an existing installation was genius. That helped us focus our time and attention on learning Eedn instead of the installation."

"The reunion has been great - as was last year's BootCamp. An excellent hands on tutorial for training and deploying CNNs on TN. I feel that I could train my own networks, and this is huge for me because I do not come from a formal machine learning background. I applaud the IBM team for putting together another great workshop."

"The technology is impressive; from our perspective it's ideal for robotic applications. We are excited to get this back to our lab and put this on our robotics platform to see how well this operates in our environments. ... The reunion, the format, and the hosting has all been absolutely outstanding. It's nice to have a lot of IBM folks around that have been responsive to all of our questions and concerns. You all clearly spent a lot of time preparing tutorials, documentation, etc., and this preparation has helped and is well appreciated."

"The fact that participants were able to get up and running so quickly (and complained so little) reflects very positively on the tools and preparation."

"I've really enjoyed the reunion. I wasn't at the bootcamp, so this is the first chance I've had to use the tools, and I found them very well constructed."

"I found this training to be very helpful in understanding how to implement neural networks on the TrueNorth hardware. For a course contained in only 4 days, I think the training had a good balance of theory, applications, and hands-on projects. The use of Docker containers to implement MatConvNet was very easy to access and deploy, and the background information sent prior to the bootcamp made it easy to get up to speed quickly. What other groups have done with the chip in such a short time was quite impressive."

"Overall the presentations were laid out very well and were conducive to good information flow. The responsiveness of the team to help field any and all questions was well-received and invaluable; if someone did not know how to answer a question they actively sought someone on the team who did."

"For a 3.5-day crash course on TrueNorth (compared to a 3-week long boot camp), this is a success, and probably the best one can organize for such a short workshop. I would like to thank everyone at IBM for making it possible. I was able to work through the tutorials at my own pace, be immersed right away, and customize without starting from scratch."

"It has been interesting and inspiring to see the breadth and extent of work that has been done by BootCamp participants, and to see how the tools have evolved since last summer."

"Excited about the new convolutional network capabilities."

"I want to start by saying thank you to all who put so much time and effort into this workshop. I wish I could have been here last year at the first bootcamp, but you all have made it very easy to get up to speed. I like the structure and format of this workshop; it was well conceived and executed."

