Summary of the technology
Software that recognizes user-defined objects in aerial images for agriculture and derives quantitative estimators for plant count (N), green area index (GAI), growth stage (GS), deficient leaf area fraction (DLAF), and weed area fraction (WAF), with the potential to add further measures. It employs a pre-trained multi-layer neural network model to classify regions of an image. The network is trained on curated examples of the distinct object types, provided by experts in aerial photography for agriculture. Once trained, the software can analyze any supplied aerial image or video.
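A minimal sketch of the region-classification step, assuming a hypothetical patch classifier: the image is tiled into patches and each patch is labeled. Here a simple green-dominance test stands in for the pre-trained neural network, which is not part of this sketch.

```python
import numpy as np

def classify_patches(image, patch=8, classify=None):
    """Tile an RGB image into patches and label each one.

    `classify` is a placeholder for the pre-trained network;
    by default a green-dominance test labels a patch "plant".
    """
    if classify is None:
        classify = lambda p: ("plant"
                              if p[..., 1].mean() > p[..., [0, 2]].mean()
                              else "soil")
    h, w, _ = image.shape
    labels = {}
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            labels[(y, x)] = classify(image[y:y + patch, x:x + patch])
    return labels

# Synthetic 16x16 image: green left half ("plant"), brownish right half ("soil").
img = np.zeros((16, 16, 3))
img[:, :8, 1] = 1.0   # green left half
img[:, 8:, 0] = 0.5   # reddish right half
result = classify_patches(img)
```

In the real system the per-patch labels would come from the trained network's inference; the tiling and label bookkeeping stay the same.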
Description of the technology
The network is trained on curated examples of the distinct object types, provided by experts in aerial photography for agriculture; once trained, the software can analyze any supplied aerial image or video. Our implementation of the neural network model on Graphics Processing Units (GPUs) provides fast inference of the features in the images. The recognized features are overlaid on the original image as an augmentation, the quantitative analysis (N, GAI, GS, DLAF, WAF) is performed on the fly by a specialized and optimized set of postprocessing filters, and the results are presented to the user as comma-separated value (CSV) tables. The software has the potential to replace the manual counting and visual evaluation of aerial photos. Because the postprocessing includes a live statistical analysis of local density and density correlations, the software can highlight abnormal regions and help the user locate them. A further perspective is live computational taxonomy: the automatic recognition of deficient areas and prediction of the growth stage. For instance, by applying logical rules to the spatial relationships between structural components, the software may distinguish between crops and weeds on the fly, automatically and without an expert.
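The density-based highlighting of abnormal regions can be sketched as follows. This is an assumption-laden simplification: a plain z-score test on per-cell object counts stands in for the software's statistical analysis, which per the description also uses density correlations.

```python
import numpy as np

def flag_abnormal_cells(counts, z_thresh=2.0):
    """Flag grid cells whose local object density deviates strongly
    from the field-wide mean (simple z-score test; a stand-in for
    the full local-density and density-correlation analysis)."""
    counts = np.asarray(counts, dtype=float)
    mu, sigma = counts.mean(), counts.std()
    if sigma == 0:
        return np.zeros_like(counts, dtype=bool)
    return np.abs((counts - mu) / sigma) > z_thresh

# Plant counts per grid cell; one cell is nearly empty
# (e.g. a gap in emergence that should be highlighted).
grid = np.array([[12, 11, 13],
                 [12,  1, 12],
                 [13, 12, 11]])
flags = flag_abnormal_cells(grid)
```

Flagged cells would then be highlighted in the augmented image so the user can inspect them directly.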
Technology Transfer Office