At Climate LLC, we are applying deep learning to solve problems across various technical and scientific domains. As my colleague Wei describes in his recent post Some Deep Learnings about Applying Deep Learning, one way we are using deep learning is to identify plant disease in farmers’ fields.
We have found that the same bottleneck can arise regardless of domain: training a neural network is a slow process when you have a model with many parameters or a lot of data, which limits the ability to iterate quickly. One approach we took to speed up training is distributed training. Our Data Science Platform facilitates analytics across all our Climate Fieldview™ data, so enabling distributed training would allow researchers to create scalable pipelines that go from raw data to predictions. Keeping data fetching, pre-processing, and machine learning on the same platform also encourages a standard set of practices among researchers on different teams who otherwise would not have shared information.
We integrated the open source package Distributed Keras, created by graduate student Joeri Hermans, into our Data Science Platform. Distributed Keras implements several data-parallel model training procedures. These procedures speed up training by distributing copies of a model's weights across multiple nodes. Each worker node trains its own copy of the model and feeds its updated parameters back to a central set of weights after a set number of iterations.
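To make the setup concrete, here is a minimal sketch of what a data-parallel training job looks like with Distributed Keras, modeled on the examples in its README. The model architecture, column names, worker count, and hyperparameters are placeholders rather than our production values, and training_df stands in for a Spark DataFrame prepared upstream on the platform.

# Minimal sketch of data-parallel training with Distributed Keras (dist-keras),
# based on the patterns in its README. Architecture, column names, worker count,
# and hyperparameters below are placeholders, not our production values.
from keras.models import Sequential
from keras.layers import Dense

from distkeras.trainers import ADAG  # one of dist-keras' asynchronous trainers

num_features, num_classes = 64, 10  # placeholders for the real input/output sizes

# An ordinary Keras model, defined once on the driver.
model = Sequential([
    Dense(256, activation="relu", input_shape=(num_features,)),
    Dense(num_classes, activation="softmax"),
])

# Each Spark worker receives a copy of the model, trains it on its partition of the
# data, and pushes its updated parameters back to the central weights every
# `communication_window` mini-batches.
trainer = ADAG(keras_model=model,
               worker_optimizer="adam",
               loss="categorical_crossentropy",
               num_workers=10,
               batch_size=32,
               communication_window=5,
               num_epoch=1,
               features_col="features",
               label_col="label")

# training_df: a Spark DataFrame with "features" and "label" columns prepared upstream.
trained_model = trainer.train(training_df)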
If updates are sent to the driver asynchronously, a node commits its new weights as soon as it finishes computing, rather than waiting for every node to finish and averaging the gradients. The result: training is faster because there is no bottleneck of waiting for a slow worker to finish. It also means that some nodes compute updates based on older parameters, so there is a limit to the number of nodes that can contribute updates before the model starts to suffer a loss in performance.
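The toy Python sketch below (not Distributed Keras code) illustrates the trade-off: each simulated worker pushes its update to the shared weights as soon as it finishes, without a barrier, so faster workers never wait but gradients may have been computed from stale parameters.

# Toy illustration of asynchronous updates; not part of Distributed Keras.
# Each "worker" pulls the central weight, computes for an uneven amount of time,
# then commits immediately. No worker waits for the others, but the gradient it
# applies may have been computed from parameters that are already out of date.
import random
import threading
import time

central_weight = [1.0]          # a single shared parameter, for illustration
lock = threading.Lock()

def worker(seed):
    rng = random.Random(seed)
    for _ in range(5):
        stale_copy = central_weight[0]            # read the current central weight
        time.sleep(rng.uniform(0.01, 0.05))       # simulate uneven compute time
        gradient = 0.1 * stale_copy               # gradient based on the (possibly stale) copy
        with lock:                                # commit as soon as we are done
            central_weight[0] -= gradient         # no waiting, no averaging across workers

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("final weight:", central_weight[0])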
The main steps we took to operationalize Distributed Keras at Climate were:
To benchmark the performance of Distributed Keras against GPU training, we trained our Geospatial Science team’s disease identification model. We hoped that Distributed Keras would provide comparable speed-ups, be cost effective (in terms of AWS hourly prices), and achieve performance similar to non-distributed training. We trained the model using:
We trained the model on two different CPU node types. With ten workers or more, training is faster than on a single GPU. Test set accuracy drops by 2-5% compared to non-distributed training on the GPU. This is a result of the asynchronous training procedure.
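A sweep like the one below is roughly how such a comparison can be set up; model and training_df are the placeholders from the earlier sketch, the worker counts are arbitrary, and the held-out test arrays are assumed to fit on the driver.

# Hedged sketch of comparing training time and test accuracy across worker counts.
# `model` and `training_df` are the placeholders from the sketch above; x_test and
# y_test are held-out NumPy arrays assumed to fit in driver memory.
import time

from distkeras.trainers import ADAG

for num_workers in (5, 10, 15, 20):   # arbitrary worker counts, for illustration only
    trainer = ADAG(keras_model=model,
                   worker_optimizer="adam",
                   loss="categorical_crossentropy",
                   num_workers=num_workers,
                   batch_size=32,
                   communication_window=5,
                   features_col="features",
                   label_col="label")

    start = time.time()
    trained_model = trainer.train(training_df)
    elapsed = time.time() - start

    # Per the dist-keras README, train() returns a Keras model we can evaluate locally.
    trained_model.compile(optimizer="adam", loss="categorical_crossentropy",
                          metrics=["accuracy"])
    _, accuracy = trained_model.evaluate(x_test, y_test, verbose=0)
    print("workers=%d  time=%.0fs  test_accuracy=%.3f" % (num_workers, elapsed, accuracy))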
Cost effectiveness is also a consideration with distributed training. Do the gains in training speed offset the added cost of each extra node? Does this approach cost more than simply paying for a multi-GPU instance? We analyzed the monetary cost of training this model using Amazon’s on-demand node prices as well as the spot prices for the nodes averaged over three months.
At the time of this experiment, spot prices for the r4 and c4 nodes were consistently around $0.12/hr (the on-demand price is at least $0.39/hr). The spot prices for the GPU instances were too volatile to be feasible.
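As a back-of-the-envelope way to frame the comparison, total cost is just training time multiplied by the per-node hourly price and the number of nodes. The sketch below uses the prices quoted above; the run times in the example are purely illustrative, not our measured results.

# Back-of-the-envelope cost framing using the prices quoted above. The run times
# in the example are placeholders for illustration, not measured benchmark results.
def training_cost(hours, price_per_node_hour, num_nodes=1):
    """Total cost of a run billed per node-hour."""
    return hours * price_per_node_hour * num_nodes

SPOT_CPU = 0.12        # $/hr, r4/c4 spot price at the time of the experiment
ON_DEMAND_CPU = 0.39   # $/hr, lower bound of the r4/c4 on-demand price

# Hypothetical 2-hour run on 10 workers: spot pricing vs. on-demand pricing.
print(training_cost(hours=2, price_per_node_hour=SPOT_CPU, num_nodes=10))       # $2.40
print(training_cost(hours=2, price_per_node_hour=ON_DEMAND_CPU, num_nodes=10))  # $7.80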
We see that, when using spot pricing, distributed training costs about the same as training on a CPU but is faster. The recent EMR 5.10 update also gave us the option of using P2 GPU nodes on our cluster.
We also found the performance is more consistent compared to training on the CPU nodes. This might be because the GPU nodes are faster and thus communicate with the central set of weights more often.
By the time we could train on GPU nodes, the spot price for a single GPU was around $0.32/hr (the on-demand price is $0.90/hr). The cost of training varies by only about $1 across these configurations. Interestingly, using more workers and training the model faster sometimes ends up costing less.
Distributed training is a valuable tool for us because it fits with the infrastructure that our team uses. Being able to build a pipeline on our Data Science Platform is critical given that data scientists often spend more time collecting and processing data than training models. This played a big role in our exploration of Distributed Keras, in addition to considerations of training time, performance, and cost.
Join us as we explore and build Deep Learning tools to help all the world’s farmers sustainably increase their productivity with digital tools.