Amazon Rekognition Custom Labels makes automated weed detection in crops easier. Instead of manually locating weeds, you can automate the process with Amazon Rekognition Custom Labels, which allows you to build machine learning (ML) models that can be trained with only a handful of images and yet are capable of accurately predicting which areas of a crop have weeds and need treatment. This saves farmers time, effort, and weed treatment costs.
Every farm has weeds. Weeds compete with crops for precious space, sunlight, water, and nutrients, and if left uncontrolled can reduce crop yield. Because weeds grow much faster than crops, they need immediate and effective control. Detecting weeds in crops, however, is currently a manual and time-consuming process. Although weed spray machines exist that can be programmed to go to an exact location in a field and spray treatment in just those spots, locating where those weeds exist is not yet automated.
Automating weed location isn’t easy, and this is where computer vision and AI come in. Amazon Rekognition is a fully managed computer vision service that allows developers to analyze images and videos for a variety of use cases, including face identification and verification, media intelligence, custom industrial automation, and workplace safety. Detecting custom objects and scenes is hard, however: training and improving the accuracy of a computer vision model requires a large amount of data and is a complex problem. Amazon Rekognition Custom Labels allows you to detect custom-labeled objects and scenes with just a handful of training images.
In this post, we use Amazon Rekognition Custom Labels to build an ML model that detects weeds in crops. We’re presently helping researchers at a US university automate this process for local farmers.
We solve this problem by feeding images of crops with and without weeds to Amazon Rekognition Custom Labels and building an ML model. After the model is built and deployed, we can perform inference by feeding the model images from field cameras. This way farmers can automate weed detection in their fields. Our experiments showed that highly accurate models can be built with as few as 32 images.
Next, we create a dataset.
For this post, we use 32 field images: half are images of crops without weeds, and half are images of weed-infected crops.
For this post, we define two labels: good-crop and weed.
We now have labeled images for both the classes we defined.
We’re now ready to train a new model.
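In this post we start training from the console, but the same step can be scripted with the AWS SDK via the boto3 create_project_version API. The following is a minimal sketch; the project ARN, version name, and output bucket are placeholder values, not ones from this post:

```python
# Sketch: starting a Rekognition Custom Labels training job from code.
# The ARN, version name, and bucket names below are placeholders.

def build_training_request(project_arn, version_name, output_bucket, output_prefix):
    # Assemble the keyword arguments for rekognition.create_project_version
    return {
        'ProjectArn': project_arn,
        'VersionName': version_name,
        'OutputConfig': {
            'S3Bucket': output_bucket,
            'S3KeyPrefix': output_prefix,
        },
    }

params = build_training_request(
    'arn:aws:rekognition:us-east-2:111122223333:project/Weed-detection-in-crops/1',
    'Weed-detection-in-crops.v1',
    'crop-weed-bucket',
    'training-output/',
)

# To actually start training (requires AWS credentials):
# import boto3
# client = boto3.client('rekognition')
# client.create_project_version(**params)  # training runs asynchronously

print(params['VersionName'])
```

Training runs asynchronously; you can poll describe_project_versions until the version status reports that training is complete.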
After the model is trained, we can see how it performed. Our model was near perfect, with an F1 score of 1.0. Precision and recall were 1.0 as well.
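For context, the F1 score shown on the model evaluation page is the harmonic mean of precision and recall, so with both at 1.0 the F1 score is necessarily 1.0:

```python
def f1_score(precision, recall):
    # F1 is the harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

print(f1_score(1.0, 1.0))  # → 1.0
```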
We can choose View test results to see how this model performed on our test data. The following screenshot shows that good crops were predicted accurately as good crops and weed-infected crops were detected as containing weeds.
We offer an AWS CloudFormation template in the GitHub repo that allows you to test the model through a browser. Choose the appropriate template depending on your Region. The template launches the required resources for you to test the model.
The template asks for your email when you launch it. When the template is ready, it emails you the required credentials. The Outputs tab for the CloudFormation stack has a website URL for testing the model.
Inference from the model is also possible using the SDK. The following code runs on the same image as in the previous section:
import boto3

def show_custom_labels(model, bucket, image, min_confidence):
    client = boto3.client('rekognition')

    # Call DetectCustomLabels
    response = client.detect_custom_labels(
        Image={'S3Object': {'Bucket': bucket, 'Name': image}},
        MinConfidence=min_confidence,
        ProjectVersionArn=model)

    # Print results
    for customLabel in response['CustomLabels']:
        print('Label ' + str(customLabel['Name']))
        print('Confidence ' + str(customLabel['Confidence']) + "\n")

    return len(response['CustomLabels'])

def main():
    bucket = 'crop-weed-bucket'
    image = "Weed-1.jpg"
    model = 'arn:aws:rekognition:us-east-2:xxxxxxxxxxxx:project/Weed-detection-in-crops/version/Weed-detection-in-crops.2021-03-30T10.02.49/yyyyyyyyyy'
    min_confidence = 1

    label_count = show_custom_labels(model, bucket, image, min_confidence)
    print("Custom labels detected: " + str(label_count))

if __name__ == "__main__":
    main()
The results from using the SDK are the same as earlier from the browser:
Label weed
Confidence 92.1469955444336

Label good-crop
Confidence 7.852999687194824

Custom labels detected: 2
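In a field deployment, a response like this could drive a simple spray/no-spray decision by thresholding the confidence of the weed label. The needs_treatment helper and the 50% threshold below are our own illustration, not part of the Rekognition API:

```python
def needs_treatment(custom_labels, threshold=50.0):
    # Return True if the 'weed' label is detected above the confidence threshold
    for label in custom_labels:
        if label['Name'] == 'weed' and label['Confidence'] >= threshold:
            return True
    return False

# Confidences taken from the sample response above
response_labels = [
    {'Name': 'weed', 'Confidence': 92.1469955444336},
    {'Name': 'good-crop', 'Confidence': 7.852999687194824},
]
print(needs_treatment(response_labels))  # → True
```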
Consider the following best practices when using Amazon Rekognition Custom Labels:
In this post, we showed how you can automate weed detection in crops by building custom ML models with Amazon Rekognition Custom Labels. Amazon Rekognition Custom Labels takes care of deep learning complexities behind the scenes, allowing you to build powerful image classification models with just a handful of training images. You can improve model accuracy by increasing the number of images in your training data and resolution of those images. Farmers can deploy models such as these into their weed spray machines in order to reduce cost and manual effort. To learn more, including other use cases and video tutorials, visit the Amazon Rekognition Custom Labels webpage.
Raju Penmatcha is a Senior AI/ML Specialist Solutions Architect at AWS. He works with education, government, and nonprofit customers on machine learning and artificial intelligence related projects, helping them build solutions using AWS. When not helping customers, he likes traveling to new places.