Amazon Lookout for Vision is a machine learning (ML) service that spots defects and anomalies in visual representations using computer vision (CV). With Amazon Lookout for Vision, manufacturing companies can increase quality and reduce operational costs by quickly identifying differences in images of objects at scale.
Many enterprise customers want to identify missing components in products, damage to vehicles or structures, irregularities in production lines, minuscule defects in silicon wafers, and other similar problems. Amazon Lookout for Vision uses ML to see and understand images from any camera as a person would, but with an even higher degree of accuracy and at a much larger scale. Amazon Lookout for Vision eliminates the need for costly and inconsistent manual inspection, while improving quality control, defect and damage assessment, and compliance. In minutes, you can begin using Amazon Lookout for Vision to automate inspection of images and objects—with no ML expertise required.
In this post, we look at how we can automate detecting anomalies in silicon wafers and notifying operators in real time.
Keeping track of product quality in a manufacturing line is a challenging task. Some process steps capture images of the product that humans then review to ensure good quality. Thanks to artificial intelligence, you can automate these anomaly detection tasks, but human intervention may still be necessary after an anomaly is detected. A standard approach is to send emails when problematic products are detected. These emails might be overlooked, which could cause a loss of quality in a manufacturing plant.
In this post, we automate the process of detecting anomalies in silicon wafers and notifying operators in real time using automated phone calls. The following diagram illustrates our architecture. We deploy a static website using AWS Amplify, which serves as the entry point for our application. Whenever a new image is uploaded via the UI (1), an AWS Lambda function invokes the Amazon Lookout for Vision model (2) and predicts whether the wafer is anomalous. The function stores each uploaded image in Amazon Simple Storage Service (Amazon S3) (3). If the wafer is anomalous, the function sends the prediction confidence to Amazon Connect and calls an operator (4), who can take further action (5).
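The Lambda function is the heart of this flow. The following minimal sketch illustrates the pattern with boto3; the payload shape and the environment variable names (BUCKET, PROJECT_NAME, MODEL_VERSION, FLOW_ID, INSTANCE_ID, SOURCE_NUMBER, DEST_NUMBER) are hypothetical, and the actual function in the repository may differ:

import base64
import os
import boto3

s3 = boto3.client("s3")
lookout = boto3.client("lookoutvision")
connect = boto3.client("connect")

def handler(event, context):
    # Decode the image sent by the static website (hypothetical payload shape).
    image_bytes = base64.b64decode(event["image"])

    # Store the uploaded image in S3 (3).
    s3.put_object(Bucket=os.environ["BUCKET"], Key="uploads/wafer.png", Body=image_bytes)

    # Invoke the Amazon Lookout for Vision model (2).
    result = lookout.detect_anomalies(
        ProjectName=os.environ["PROJECT_NAME"],
        ModelVersion=os.environ["MODEL_VERSION"],
        Body=image_bytes,
        ContentType="image/png",
    )["DetectAnomalyResult"]

    # If the wafer is anomalous, call the operator via Amazon Connect (4).
    if result["IsAnomalous"]:
        connect.start_outbound_voice_contact(
            DestinationPhoneNumber=os.environ["DEST_NUMBER"],
            ContactFlowId=os.environ["FLOW_ID"],
            InstanceId=os.environ["INSTANCE_ID"],
            SourcePhoneNumber=os.environ["SOURCE_NUMBER"],
            Attributes={"Confidence": str(round(result["Confidence"] * 100, 2))},
        )

    return {"IsAnomalous": result["IsAnomalous"], "Confidence": result["Confidence"]}

Passing the confidence as a contact attribute lets the contact flow read it back to the operator during the call.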
To configure Amazon Connect and the contact flow, you complete the following high-level steps:
The first step is to create an Amazon Connect instance. For the rest of the setup, we use the default values, but don’t forget to create an administrator login.
Instance creation can take a few minutes, after which we can log in to the Amazon Connect instance using the admin account we created.
In this post, we have a predefined contact flow that we can import. For more information about importing an existing contact flow, see Import/export contact flows.
The imported contact flow looks similar to the following screenshot.
Here you can find the ARN of the contact flow. Note the instance ID and contact flow ID embedded in the ARN (arn:aws:connect:<region>:<account>:instance/<InstanceID>/contact-flow/<FlowID>); you need both later in the deployment.
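If you prefer to look up these IDs programmatically, a minimal boto3 sketch follows (YOUR_INSTANCE_ID is a placeholder):

import boto3

connect = boto3.client("connect")

# List all Amazon Connect instances to find the instance ID.
for instance in connect.list_instances()["InstanceSummaryList"]:
    print(instance["Id"], instance["InstanceAlias"])

# List the contact flows of an instance, including their ARNs.
flows = connect.list_contact_flows(InstanceId="YOUR_INSTANCE_ID")
for flow in flows["ContactFlowSummaryList"]:
    print(flow["Name"], flow["Id"], flow["Arn"])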
Claiming a number is easy and takes just a few clicks. Make sure to choose the previously imported contact flow while claiming the number.
If no numbers are available in the country of your choice, raise a support ticket.
The following screenshot shows our contact flow.
The contact flow performs the following functions:
Optionally, you can enhance your system with an Amazon Lex bot.
Now that you have set up Amazon Connect, deployed your contact flow, and noted the information you need for the rest of the deployment, we can deploy the remaining components. In the cloned GitHub repository, edit the build.sh script and run it from the command line:
# Global variables
ApplicationRegion="YOUR_REGION"
S3SourceBucket="YOUR_S3_BUCKET-sagemaker"
LookoutProjectName="YOUR_PROJECT_NAME"
FlowID="YOUR_FLOW_ID"
InstanceID="YOUR_INSTANCE_ID"
SourceNumber="YOUR_CLAIMED_NUMBER"
DestNumber="YOUR_MOBILE_PHONE_NUMBER"
CloudFormationStack="YOUR_CLOUD_FORMATION_STACK_NAME"
Provide the following information:
- ApplicationRegion: the AWS Region you deploy the solution in
- S3SourceBucket: the name of the S3 bucket (suffixed with -sagemaker) used by the solution
- LookoutProjectName: the name of your Amazon Lookout for Vision project
- FlowID and InstanceID: the IDs from your contact flow ARN (arn:aws:connect:<region>:<account>:instance/<InstanceID>/contact-flow/<FlowID>)
- SourceNumber: the phone number you claimed in Amazon Connect
- DestNumber: your mobile phone number, which receives the operator call
- CloudFormationStack: the name of the AWS CloudFormation stack that the script creates
The script then deploys the remaining infrastructure as an AWS CloudFormation stack. After the stack is deployed, you can review the created resources on the AWS CloudFormation console.
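If you prefer to verify the deployment programmatically, the following minimal boto3 sketch (using the stack name you set in build.sh) lists the stack's resources:

import boto3

cfn = boto3.client("cloudformation")

# List every resource created by the stack that build.sh deployed.
resources = cfn.describe_stack_resources(StackName="YOUR_CLOUD_FORMATION_STACK_NAME")
for res in resources["StackResources"]:
    print(res["LogicalResourceId"], res["ResourceType"], res["ResourceStatus"])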
You can see that an Amazon SageMaker notebook called amazon-lookout-vision-create-project is also created.
In this section, we see how to build, train, and deploy the Amazon Lookout for Vision model using the open-source Python SDK. For more information about the Amazon Lookout for Vision Python SDK, see this blog post.
You can build the model via the AWS Management Console. For programmatic deployment, complete the following steps:
On the notebook instance, you can find the GitHub repository of the Amazon Lookout for Vision Python SDK already cloned. The folder contains an example notebook that walks you through building, training, and deploying a model. Before you get started, you need to upload the images used to train the model to your notebook instance.
Example images are in the downloaded GitHub repository.
The notebook walks you through the process of creating your model. One important first step is to provide the following information:
# Training & Inference
input_bucket = "YOUR_S3_BUCKET_FOR_TRAINING"
project_name = "YOUR_PROJECT_NAME"
model_version = "1"  # leave this as 1 if you start right at the beginning

# Inference
output_bucket = "YOUR_S3_BUCKET_FOR_INFERENCE"  # can be same as input_bucket
input_prefix = "YOUR_KEY_TO_FILES_TO_PREDICT/"  # used in batch_predict
output_prefix = "YOUR_KEY_TO_SAVE_FILES_AFTER_PREDICTION/"  # used in batch_predict
You can ignore the inference section for now, but feel free to play around with that part of the notebook as well. Because you're just getting started, leave model_version set to "1".
For input_bucket and project_name, use the S3 bucket and Amazon Lookout for Vision project name that you provided in the build.sh script. You can then run each cell in the notebook to build, train, and deploy the model.
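For orientation, the following condensed sketch shows the main steps the notebook performs, using the SDK's Manifest and LookoutForVision classes; exact method arguments may differ between SDK versions, so treat this as an outline rather than the notebook's literal code:

from lookoutvision.manifest import Manifest
from lookoutvision.lookoutvision import LookoutForVision

# input_bucket, project_name, output_bucket, and model_version come from the
# configuration cell shown above.
l4v = LookoutForVision(project_name=project_name)
l4v.create_project()

# Generate and push manifest files for the training images in the input bucket,
# then register them as the project's datasets.
mft = Manifest(bucket=input_bucket, s3_path=project_name + "/", datasets=["training", "validation"])
l4v.create_datasets(mft.push_manifests(), wait=True)

# Train the model and host it for real-time predictions.
l4v.fit(output_bucket=output_bucket, model_prefix="wafer_", wait=True)
l4v.deploy(model_version=model_version, wait=True)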
You can view the training metrics using the SDK, but you can also find them on the console. To do so, open your project, navigate to the models, and choose the model you’ve trained. The metrics are available on the Performance metrics tab.
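Programmatically, a minimal boto3 snippet can retrieve the same numbers:

import boto3

lookout = boto3.client("lookoutvision")

# Fetch precision, recall, and F1 score for the trained model version.
desc = lookout.describe_model(ProjectName=project_name, ModelVersion=model_version)
print(desc["ModelDescription"]["Performance"])  # {'F1Score': ..., 'Recall': ..., 'Precision': ...}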
You’re now ready to deploy a static website that can call your model on demand.
Your first step is to add your Amazon API Gateway endpoint URL to the static website's HTML source code.
At the end of the file, you can find a section that uses jQuery to trigger an AJAX request. The key called url has an empty string as its value; replace it with your API Gateway endpoint so that the code looks similar to the following:
$.ajax({
    type: 'POST',
    url: 'https://<API_Gateway_ID>.execute-api.<AWS_REGION>.amazonaws.com/dev/amazon-lookout-vision-api',
    data: JSON.stringify({coordinates: coordinates, image: reader.result}),
    cache: false,
    contentType: false,
    processData: false,
    success: function(data) {
        var anomaly = data["IsAnomalous"];
        var confidence = data["Confidence"];
        text = "Anomaly: " + anomaly + "<br>" + "Confidence: " + confidence + "<br>";
        $("#json").html(text);
    },
    error: function(data) {
        console.log("error");
        console.log(data);
    }
});
The front-end environment page of your app opens automatically.
You can enhance this setup by connecting AWS Amplify to a Git repository to automate your whole deployment.
After the deployment is successful, you can use your web application by choosing the domain displayed in AWS Amplify.
Congratulations! You just built a solution to automate the detection of anomalies in silicon wafers and alert an operator to take appropriate action. The data we use for Amazon Lookout for Vision is a wafer map taken from Wikipedia. A few “bad” spots have been added to mimic real-world scenarios in semiconductor manufacturing.
After deploying the solution, you can run a test to see how it works. When you open the AWS Amplify domain, you see a website that lets you upload an image. For this post, we present the result of detecting a bad wafer with a so-called donut pattern. After you upload the image, it’s displayed on your website.
If the image is detected as an anomaly, Amazon Connect calls your phone number and you can interact with the service.
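You can also exercise the API directly, without the website. The following minimal Python sketch assumes the API accepts the same JSON body the website sends (a base64-encoded data URL in image plus a coordinates field, shown here with placeholder values):

import base64
import json
import requests

API_URL = "https://<API_Gateway_ID>.execute-api.<AWS_REGION>.amazonaws.com/dev/amazon-lookout-vision-api"

# Read a local wafer image and encode it the way the website does (as a data URL).
with open("bad_wafer.png", "rb") as f:
    data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()

# Placeholder coordinates; the UI supplies real values.
payload = {"coordinates": "0,0", "image": data_url}
response = requests.post(API_URL, data=json.dumps(payload))
print(response.json())  # for example: {'IsAnomalous': True, 'Confidence': 0.98}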
In this post, we used Amazon Lookout for Vision to automate the detection of anomalies in silicon wafers and alert an operator in real time using Amazon Connect so they can take action as needed.
This solution isn't limited to wafers. You can extend it to object tracking in transportation, product inspection in manufacturing, and countless other use cases.
Tolla Cherwenka is an AWS Global Solutions Architect who is certified in data and analytics. She uses an art-of-the-possible approach and works backward from business goals to develop transformative event-driven data architectures that enable data-driven decisions. She is also passionate about creating prescriptive solutions for refactoring mission-critical monolithic workloads to microservices, and for supply chains and connected factories that use IoT, machine learning, big data, and analytics services.
Michael Wallner is a Global Data Scientist with AWS Professional Services and is passionate about enabling customers on their AI/ML journey in the cloud to become AWSome. Besides having a deep interest in Amazon Connect, he likes sports and enjoys cooking.
Krithivasan Balasubramaniyan is a Principal Consultant at Amazon Web Services. He enables global enterprise customers in their digital transformation journey and helps architect cloud native solutions.