Agriculture-Vision: preventing major losses in agriculture

New research has resulted in the development of Agriculture-Vision, a large-scale aerial image dataset for efficiently detecting potential issues in crop fields.

A research paper by the University of Illinois at Urbana-Champaign, Intelinair, and the University of Oregon has introduced deep neural network-based algorithms that can detect negative field conditions, allowing farmers to take immediate preventative action against major crop loss.

To encourage further research into computer vision for agriculture, the researchers developed Agriculture-Vision, a large-scale aerial farmland image dataset for semantic segmentation of agricultural patterns.

The research builds on deep neural network-based algorithms, which have proven effective across a wide variety of fields, such as medicine and astronomy. The algorithms used in this study have proven efficient at detecting field conditions in a timely manner.

Key data from Agriculture-Vision

Researchers collected 94,986 high-quality aerial images from 3,432 farms across the USA. Each image consists of RGB and near-infrared channels at high resolution. The researchers annotated nine types of field anomaly patterns that are most important to farmers, ensuring that the algorithms target these patterns.
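To illustrate what working with such data might look like, the sketch below loads an RGB tile together with its near-infrared band and a per-pixel anomaly label map, and stacks them into a single four-channel tensor. The directory layout (rgb/, nir/, labels/ with matching file names) is an assumption made for illustration and does not reflect the actual Agriculture-Vision release format.

```python
import os
from glob import glob

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class FarmlandTiles(Dataset):
    """Minimal sketch of a 4-channel (RGB + NIR) farmland tile dataset.

    Hypothetical layout: <root>/rgb/*.png, <root>/nir/*.png, <root>/labels/*.png,
    with matching file names across the three folders.
    """

    def __init__(self, root):
        self.root = root
        self.rgb_paths = sorted(glob(os.path.join(root, "rgb", "*.png")))

    def __len__(self):
        return len(self.rgb_paths)

    def __getitem__(self, idx):
        name = os.path.basename(self.rgb_paths[idx])
        rgb = np.asarray(Image.open(self.rgb_paths[idx]).convert("RGB"),
                         dtype=np.float32) / 255.0
        nir = np.asarray(Image.open(os.path.join(self.root, "nir", name)).convert("L"),
                         dtype=np.float32) / 255.0
        label = np.asarray(Image.open(os.path.join(self.root, "labels", name)),
                           dtype=np.int64)  # per-pixel anomaly class ids

        # Stack into a single 4-channel image: R, G, B, NIR.
        image = np.dstack([rgb, nir[..., None]])           # H x W x 4
        image = torch.from_numpy(image).permute(2, 0, 1)   # 4 x H x W
        return image, torch.from_numpy(label)
```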

Agriculture-Vision aims to be a publicly available, large-scale aerial agricultural image dataset that is high-resolution and multi-band, with multiple types of patterns annotated by agronomy experts.

According to the paper, this task has proven much more challenging than typical semantic segmentation tasks on other aerial image datasets. To segment weed patterns in aerial farmland images, the algorithm must be able to identify sparse weed clusters of vastly different shapes and coverages.

“Agriculture-Vision differs significantly from other aerial image datasets in the following aspects: (1) unprecedented aerial image resolutions up to 10 cm per pixel (cm/px); (2) multiple aligned image channels beyond RGB; (3) challenging annotations of multiple agricultural anomaly patterns; (4) precise annotations from professional agronomists with a strict quality assurance process; and (5) large size and shape variations of annotations,” the study says.
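Because the tiles carry image channels beyond RGB, a segmentation model for this data needs an input layer sized for four channels rather than three. The sketch below is a deliberately tiny fully convolutional baseline showing that adaptation; it is not the architecture evaluated in the paper, and the assumed class count (nine anomaly patterns plus a background class) is an illustrative encoding.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 10  # 9 anomaly patterns + background (assumed encoding)


class TinySegNet(nn.Module):
    """Minimal fully convolutional baseline for 4-channel (RGB + NIR) tiles."""

    def __init__(self, in_channels=4, num_classes=NUM_CLASSES):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # downsample by 2
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, kernel_size=1),    # per-pixel class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    model = TinySegNet()
    tile = torch.randn(1, 4, 512, 512)   # one RGB+NIR tile
    logits = model(tile)                 # shape: (1, NUM_CLASSES, 512, 512)
    print(logits.shape)
```

In practice one would swap this toy network for a stronger backbone, but the key point, widening the first convolution to accept the extra near-infrared band, stays the same.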
