Precision Image Processing and Recognition in Farmwave

by Craig Ganssle

Computer vision has quietly emerged as a major pillar in the foundation of AI. Image analysis has grown dramatically over the past few years, driven in part by greater access to the processing power of the cloud. Object recognition is now astonishingly precise: with the implementation of deep neural networks, error rates have dropped below 2%.

When Dr. Fei-Fei Li began constructing the ImageNet database, smart detection of objects was limited to yes/no options. (Yes, hot dog. No, not hot dog.) It took almost three years for her team to assemble and organize over 3.2 million images for the initial data set. Eventually the set ballooned to 15 million images, organizing the world’s objects into a language machines can understand. This paved the way for other datasets and refined the speed and efficiency with which machines could learn. Now, instead of requiring thousands of images of an object, machines can be taught with a few hundred.

Tesla’s Sr. Director of Artificial Intelligence, Andrej Karpathy, stated at a recent Tesla press conference that visual recognition is absolutely necessary for the company’s push into autonomy. Each Tesla you see on the road today depends on deep neural networks interpreting HD footage shot by its eight onboard cameras in real time, enabling the car to understand the environment in which it is driving. The technology of rapidly interpreting and understanding the world through visual data while in motion is now here.

Farmwave has always used image analysis to empower its users. We were the first to leverage image processing algorithms to count the kernels on an ear of corn, helping growers determine yield when factoring in stand counts. Soon we were tapping into cloud-based systems to power our image recognition of crop diseases. We call this detection tool our CORE (Cloud Optimized Recognition Engine).
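Farmwave’s actual kernel-counting pipeline is proprietary, but the core idea behind this kind of image processing is standard: threshold the image so kernels appear as bright regions, then count the connected regions. A minimal pure-Python sketch of that idea, on a toy grayscale grid rather than a real ear-of-corn photo:

```python
from collections import deque

def count_blobs(grid, threshold=128):
    """Count connected bright regions (4-connectivity) in a grayscale grid.
    Each region stands in for one segmented kernel in an ear image."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] >= threshold and not seen[r][c]:
                blobs += 1  # new region found; flood-fill all its pixels
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return blobs

# Toy 5x6 "image" with three bright patches -> three counted regions
image = [
    [200, 200,   0,   0, 180,   0],
    [200,   0,   0,   0, 180,   0],
    [  0,   0,   0,   0,   0,   0],
    [  0, 150, 150,   0,   0,   0],
    [  0, 150,   0,   0,   0,   0],
]
print(count_blobs(image))  # 3
```

A production system would add noise filtering and handle touching kernels, but the threshold-and-count loop is the same basic building block.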

As Dr. Li discovered with ImageNet, bias can easily creep into a data set. Our field imagers needed to start with a clean set and accommodate changing conditions, lighting, and growth stages. Images needed to be verified and tagged properly. It was a daunting task, but as the machine learning matured, the complex models also became dramatically more efficient at identifying diseases.

In building out our image library and improving analysis, we refined proprietary internal tools to accelerate the process of accurate model construction. Farmwave became smarter, and as the database began to grow, we were able to train our identification models to pick up more obscure diseases and pest indicators on multiple crops. Is it grey leaf spot? Or is it bacterial leaf streak? Our neural net finds details the eye can miss. 
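The models themselves are proprietary, but the last step of any such classifier, turning the network’s raw scores into per-disease confidences, is a standard softmax. A minimal sketch with hypothetical class names and scores (not actual Farmwave output):

```python
import math

def softmax(logits):
    """Convert raw network scores into probabilities that sum to 1."""
    m = max(logits)                      # subtract max for numeric stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical output layer for two easily confused corn diseases
classes = ["grey_leaf_spot", "bacterial_leaf_streak", "healthy"]
logits = [2.9, 1.1, -0.4]               # illustrative scores, not real model output
probs = softmax(logits)
best = classes[probs.index(max(probs))]
print(best, round(max(probs), 3))
```

In practice the confidence gap between the top two classes is what matters for look-alike diseases: a narrow gap signals a case worth flagging for human review.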

Armed with this smart identification tool, Farmwave is now applying these capabilities to other aspects of growing. Our goal is to assist farmers in making better decisions in the field. We want to help growers make rapid counts without stopping machinery or breaking out the clipboards and calculators. Farmwave can be that extra set of eyes on the field, providing real-time feedback on precision measurements while the focus is on other tasks. We’re building a virtual toolbox of vision filters to assist at every stage of the growing season.

While we’ve been teaching Farmwave’s CORE to detect anomalies in leaves, the resolution of cameras has jumped ahead. This opens up tremendous possibilities for where and when scanning for disease can be done. In addition to supporting hardworking agronomists who walk the fields and record their findings, Farmwave can also be implemented as an API into drone systems, remote sensors, automated scout probes, and onboard machinery.
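The exact Farmwave API is not described here, so the following is only an illustrative sketch of what a drone or scout-probe integration might look like: capture frames, send each through a detection call, and collect high-confidence hits. The endpoint call is stubbed out behind an injectable function (all names hypothetical):

```python
import json

def scan_frames(frames, detect, min_conf=0.8):
    """Run each captured frame through a detection callable and collect
    high-confidence disease hits.  `detect` stands in for a call to a
    hypothetical Farmwave-style REST endpoint returning parsed JSON."""
    alerts = []
    for i, frame in enumerate(frames):
        result = detect(frame)  # e.g. POST the image bytes, parse the response
        for hit in result.get("detections", []):
            if hit["confidence"] >= min_conf:
                alerts.append({"frame": i, **hit})
    return alerts

# Stub transport for illustration; a real integration would POST over HTTP
def fake_detect(frame):
    if frame == "diseased.jpg":
        return {"detections": [{"label": "grey_leaf_spot", "confidence": 0.91}]}
    return {"detections": []}

alerts = scan_frames(["healthy.jpg", "diseased.jpg"], fake_detect)
print(json.dumps(alerts))
```

Keeping the transport injectable like this lets the same scanning loop run against a drone’s onboard camera feed, a remote sensor, or a batch of agronomist photos.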

Vertical farming can also benefit from Farmwave’s detection system. Growers can keep an eye on stacks of greens with mounted cameras, or through automated scans with overhead cameras on tracks. This avoids the need for grower intervention on hard-to-access upper levels.

Farmwave’s digital image processing helps growers gain a more comprehensive picture of their fields. The visual enhancement gives agronomists more tools for ground truthing in order to make faster informed decisions. It’s a clear vision of what's ahead.