Defect detection in high-resolution imagery using two-stage Amazon Rekognition Custom Labels models


High-resolution imagery is very prevalent in today's world, from satellite imagery to drones and DSLR cameras. From this imagery, we can capture damage caused by natural disasters, anomalies in manufacturing equipment, or very small defects such as defects on printed circuit boards (PCBs) or semiconductors. Building anomaly detection models using high-resolution imagery can be challenging because modern computer vision models typically resize images to a lower resolution to fit into memory for training and running inference. Reducing the image resolution significantly means that visual information about the defect is degraded or completely lost.

One approach to overcome these challenges is to build two-stage models. Stage 1 models detect a region of interest, and Stage 2 models detect defects on the cropped region of interest, thereby maintaining sufficient resolution for small defects.

In this post, we go over how to build an effective two-stage defect detection system using Amazon Rekognition Custom Labels and compare results for this specific use case with one-stage models. Note that several one-stage models are effective even at lower or resized image resolutions, and others may accommodate large images in smaller batches.

Solution overview

For our use case, we use a dataset of images of PCBs with synthetically generated missing hole pins, as shown in the following example.

We use this dataset to demonstrate that a one-stage approach using object detection results in subpar detection performance for the missing hole pin defects. A two-stage model is preferred, in which we use Rekognition Custom Labels first for object detection to identify the pins, and then a second-stage model to classify cropped images of the pins into pins with missing holes or normal pins.

The training process for a Rekognition Custom Labels model consists of several steps, as illustrated in the following diagram.

First, we use Amazon Simple Storage Service (Amazon S3) to store the image data. The data is ingested into Amazon SageMaker Jupyter notebooks, where typically a data scientist inspects the images and preprocesses them, removing any images that are of poor quality, such as blurred images or poor lighting conditions, and resizing or cropping the images. The data is then split into training and test sets, and Amazon SageMaker Ground Truth labeling jobs are run to label the sets of images and output a train and test manifest file. The manifest files are used by Rekognition Custom Labels for training.
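For reference, the following is a minimal sketch (with placeholder bucket, key, and project names) of how the Ground Truth manifest files could be registered with Rekognition Custom Labels and model training started using the AWS SDK for Python (Boto3):

```python
import boto3

# All bucket, key, and project names below are placeholders for illustration.
rekognition = boto3.client("rekognition")

# Create a Custom Labels project to hold the datasets and model versions.
project = rekognition.create_project(ProjectName="pcb-defect-detection")
project_arn = project["ProjectArn"]

# Register the train and test manifest files produced by SageMaker Ground Truth.
for dataset_type, manifest_key in [("TRAIN", "manifests/train.manifest"),
                                   ("TEST", "manifests/test.manifest")]:
    rekognition.create_dataset(
        ProjectArn=project_arn,
        DatasetType=dataset_type,
        DatasetSource={
            "GroundTruthManifest": {
                "S3Object": {"Bucket": "my-pcb-images", "Name": manifest_key}
            }
        },
    )

# Start training a model version; evaluation results are written back to Amazon S3.
rekognition.create_project_version(
    ProjectArn=project_arn,
    VersionName="missing-hole-detector-v1",
    OutputConfig={"S3Bucket": "my-pcb-images", "S3KeyPrefix": "training-output/"},
)
```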

One-stage model approach

The first approach we take to identifying missing holes on the PCB is to label the missing holes and train an object detection model to identify them. The following is an example image from the dataset.

We train a model with a dataset of 95 images used for training and 20 images used for testing. The following table summarizes our results.

Evaluation Results
F1 Score | Average Precision | Overall Recall
0.468 | 0.750 | 0.340
Training Time | Training Dataset | Testing Dataset
Trained in 1.791 hours | 1 label, 95 images | 1 label, 20 images
Per Label Performance
Label Name | F1 Score | Test Images | Precision | Recall | Assumed Threshold
missing_hole | 0.468 | 20 | 0.750 | 0.340 | 0.053

The resulting model has high precision but low recall, meaning that when we localize a region for a missing hole, we're usually correct, but we're missing many of the missing holes that are present on the PCB. To build an effective defect detection system, we need to improve recall. The low performance of this model may be due to the defects being small in this high-resolution image of the PCB and the model having no reference of a healthy pin.
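As a quick check, the F1 score reported above is the harmonic mean of precision and recall: F1 = 2 × (0.750 × 0.340) / (0.750 + 0.340) ≈ 0.468, which matches the value in the table.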

Next, we explore splitting the image into four or six crops depending on the PCB size and labeling both healthy pins and missing holes. The following is an example of the resulting cropped image.
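The following is a minimal sketch of how a full board image could be split into a grid of crops before labeling; the file paths and grid dimensions are illustrative assumptions:

```python
from pathlib import Path
from PIL import Image

def split_into_grid(image_path: str, output_dir: str, rows: int = 2, cols: int = 2) -> list[str]:
    """Split one high-resolution PCB image into rows x cols crops (2x2 = four crops)."""
    image = Image.open(image_path)
    width, height = image.size
    crop_w, crop_h = width // cols, height // rows

    Path(output_dir).mkdir(parents=True, exist_ok=True)
    crop_paths = []
    for row in range(rows):
        for col in range(cols):
            box = (col * crop_w, row * crop_h, (col + 1) * crop_w, (row + 1) * crop_h)
            crop_path = f"{output_dir}/{Path(image_path).stem}_r{row}_c{col}.png"
            image.crop(box).save(crop_path)
            crop_paths.append(crop_path)
    return crop_paths

# Example: split one board into four crops (2 x 2); use rows=2, cols=3 for six crops.
split_into_grid("pcb_board_01.png", "crops/", rows=2, cols=2)
```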

We train a model with 524 images used for training and 106 images used for testing. We maintain the same PCBs used for train and test as in the full board model. The results for cropped healthy pins vs. missing holes are shown in the following table.

Evaluation Results
F1 Score | Average Precision | Overall Recall
0.967 | 0.989 | 0.945
Training Time | Training Dataset | Testing Dataset
Trained in 2.118 hours | 2 labels, 524 images | 2 labels, 106 images
Per Label Performance
Label Name | F1 Score | Test Images | Precision | Recall | Assumed Threshold
missing_hole | 0.949 | 42 | 0.980 | 0.920 | 0.536
pin | 0.984 | 106 | 0.998 | 0.970 | 0.696

Both precision and recall have improved significantly. Training the model with zoomed-in cropped images and giving it a reference for healthy pins helped. However, recall is still at 92%, meaning that we would still miss 8% of the missing holes and let defects go unnoticed.

Next, we explore a two-stage model approach with which we can improve the model performance further.

Two-stage model approach

For the two-stage model, we train two models: one for detecting pins and one for detecting whether a pin is missing its hole or not on zoomed-in cropped images of the pin. The following is an image from the pin detection dataset.

The data is similar to our previous experiment, in which we cropped the PCB into four or six cropped images. This time, we label all pins and don't make any distinction as to whether the pin has a missing hole or not. We train this model with 522 images and test with 108 images, maintaining the same train/test split as the previous experiments. The results are shown in the following table.

Evaluation Results
F1 Score | Average Precision | Overall Recall
1.000 | 0.999 | 1.000
Training Time | Training Dataset | Testing Dataset
Trained in 1.581 hours | 1 label, 522 images | 1 label, 108 images
Per Label Performance
Label Name | F1 Score | Test Images | Precision | Recall | Assumed Threshold
pin | 1.000 | 108 | 0.999 | 1.000 | 0.617

The model detects the pins perfectly on this synthetic dataset.

Next, we build the model to make the distinction between missing holes and normal pins. We use cropped images of the holes to train the second stage of the model, as shown in the following examples. This model is separate from the previous models because it's a classification model and is focused on the narrow task of determining whether the pin has a missing hole.
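The following is a minimal sketch of how such pin crops could be generated from the first-stage detections (Rekognition Custom Labels returns bounding boxes as ratios of the image width and height); the padding value and output layout are assumptions for illustration:

```python
from PIL import Image

def crop_detected_pins(image_path, detections, output_dir, padding=0.01):
    """Crop each detected pin from the full-resolution image.

    `detections` is the CustomLabels list returned by DetectCustomLabels;
    each bounding box is expressed as a ratio of the image dimensions.
    """
    image = Image.open(image_path)
    width, height = image.size
    crop_paths = []
    for i, label in enumerate(detections):
        box = label["Geometry"]["BoundingBox"]
        left = max(0.0, (box["Left"] - padding) * width)
        top = max(0.0, (box["Top"] - padding) * height)
        right = min(width, (box["Left"] + box["Width"] + padding) * width)
        bottom = min(height, (box["Top"] + box["Height"] + padding) * height)
        crop_path = f"{output_dir}/pin_{i:04d}.png"
        image.crop((int(left), int(top), int(right), int(bottom))).save(crop_path)
        crop_paths.append(crop_path)
    return crop_paths
```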

We train this second-stage model on 16,624 images and test on 3,266, maintaining the same train/test splits as the previous experiments. The following table summarizes our results.

Evaluation Results
F1 Score | Average Precision | Overall Recall
1.000 | 1.000 | 1.000
Training Time | Training Dataset | Testing Dataset
Trained in 6.660 hours | 2 labels, 16,624 images | 2 labels, 3,266 images
Per Label Performance
Label Name | F1 Score | Test Images | Precision | Recall | Assumed Threshold
anomaly | 1.000 | 88 | 1.000 | 1.000 | 0.960
normal | 1.000 | 3,178 | 1.000 | 1.000 | 0.996

Again, we achieve perfect precision and recall on this synthetic dataset. Combining the previous pin detection model with this second-stage missing hole classification model, we can build a system that outperforms any single-stage model.

The following table summarizes the experiments we performed.

Experiment | Type | Description | F1 Score | Precision | Recall
1 | One-stage model | Object detection model to detect missing holes on full images | 0.468 | 0.75 | 0.34
2 | One-stage model | Object detection model to detect healthy pins and missing holes on cropped images | 0.967 | 0.989 | 0.945
3 | Two-stage model | Stage 1: Object detection on all pins | 1.000 | 0.999 | 1.000
  |  | Stage 2: Image classification of healthy pin or missing hole | 1.000 | 1.000 | 1.000
  |  | End-to-end average | 1.000 | 0.9995 | 1.000

Inference pipeline

You can use the following architecture to deploy the one-stage and two-stage models described in this post. The following key components are involved:

For one-stage models, you can send an input image to the API Gateway endpoint, followed by Lambda for any basic image preprocessing, and route it to the Rekognition Custom Labels trained model endpoint. In our experiments, we explored one-stage models that detect only missing holes, or missing holes and healthy pins.
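As a minimal sketch of the one-stage flow (the bucket, key, and model ARN are placeholders), the Lambda function could call the trained and started model version with the DetectCustomLabels API:

```python
import boto3

rekognition = boto3.client("rekognition")

# Placeholder ARN of the trained and started Rekognition Custom Labels model version.
MODEL_ARN = "arn:aws:rekognition:us-east-1:123456789012:project/pcb-defects/version/v1/1234567890"

response = rekognition.detect_custom_labels(
    ProjectVersionArn=MODEL_ARN,
    Image={"S3Object": {"Bucket": "my-pcb-images", "Name": "incoming/board_01.png"}},
    MinConfidence=50,
)

# Each detection includes the label name, confidence, and bounding box geometry.
for label in response["CustomLabels"]:
    print(label["Name"], label["Confidence"], label.get("Geometry", {}).get("BoundingBox"))
```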

For two-stage models, you can similarly send an image to the API Gateway endpoint, followed by Lambda. Lambda acts as an orchestrator that first calls the object detection model (trained using Rekognition Custom Labels), which generates the regions of interest. The original image is then cropped in the Lambda function and sent to another Rekognition Custom Labels classification model for detecting defects in each cropped image.
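The following is a minimal sketch of that orchestration logic, assuming both model versions are already started; the ARNs are placeholders and error handling is omitted:

```python
import io
import boto3
from PIL import Image

rekognition = boto3.client("rekognition")

# Placeholder ARNs for the Stage 1 (pin detection) and Stage 2 (classification) models.
PIN_DETECTOR_ARN = "arn:aws:rekognition:us-east-1:123456789012:project/pin-detector/version/v1/1"
HOLE_CLASSIFIER_ARN = "arn:aws:rekognition:us-east-1:123456789012:project/hole-classifier/version/v1/1"

def detect_missing_holes(image_bytes: bytes, min_confidence: float = 50) -> list[dict]:
    """Stage 1: detect pins; Stage 2: classify each cropped pin as anomaly or normal."""
    detections = rekognition.detect_custom_labels(
        ProjectVersionArn=PIN_DETECTOR_ARN,
        Image={"Bytes": image_bytes},
        MinConfidence=min_confidence,
    )["CustomLabels"]

    image = Image.open(io.BytesIO(image_bytes))
    width, height = image.size
    results = []
    for pin in detections:
        box = pin["Geometry"]["BoundingBox"]
        # Crop the original full-resolution image around the detected pin.
        crop = image.crop((
            int(box["Left"] * width),
            int(box["Top"] * height),
            int((box["Left"] + box["Width"]) * width),
            int((box["Top"] + box["Height"]) * height),
        ))
        buffer = io.BytesIO()
        crop.save(buffer, format="PNG")
        labels = rekognition.detect_custom_labels(
            ProjectVersionArn=HOLE_CLASSIFIER_ARN,
            Image={"Bytes": buffer.getvalue()},
            MinConfidence=0,
        )["CustomLabels"]
        # Keep the highest-confidence classification label for this pin.
        top_label = max(labels, key=lambda l: l["Confidence"]) if labels else None
        results.append({"bounding_box": box, "classification": top_label})
    return results
```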

Conclusion

In this post, we trained one- and two-stage models to detect missing holes in PCBs using Rekognition Custom Labels. We reported results for various models; in our case, two-stage models outperformed the other variants. We encourage customers with high-resolution imagery from other domains to test model performance with one- and two-stage models. Additionally, consider the following ways to extend the solution:

  • Sliding window crops for your actual datasets (see the sketch after this list)
  • Reusing your object detection models in the same pipeline
  • Pre-labeling workflows using bounding box predictions
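For example, a sliding window cropper could replace the fixed four- or six-crop grid for boards of arbitrary size; the window size and stride below are illustrative assumptions:

```python
from PIL import Image

def sliding_window_crops(image_path: str, window: int = 1024, stride: int = 512):
    """Yield overlapping square crops and their offsets from a high-resolution image."""
    image = Image.open(image_path)
    width, height = image.size
    for top in range(0, max(height - window, 0) + 1, stride):
        for left in range(0, max(width - window, 0) + 1, stride):
            yield (left, top), image.crop((left, top, left + window, top + window))

# Example usage: the offsets let you map detections back to full-image coordinates.
for (left, top), crop in sliding_window_crops("pcb_board_01.png"):
    pass  # run detection on `crop`, then shift each bounding box by (left, top)
```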

About the authors

Andreas Karagounis is a Data Science Manager at Accenture. He holds a master's in Computer Science from Brown University. He has a background in computer vision and works with customers to solve their business challenges using data science and machine learning.

Yogesh Chaturvedi is a Principal Solutions Architect at AWS with a focus on computer vision. He works with customers to address their business challenges using cloud technologies. Outside of work, he enjoys hiking, traveling, and watching sports.

Shreyas Subramanian is a Principal Data Scientist who helps customers use machine learning to solve their business challenges on the AWS platform. Shreyas has a background in large-scale optimization and machine learning, and in the use of machine learning and reinforcement learning for accelerating optimization tasks.

Selimcan "Can" Sakar is a cloud-first developer and Solutions Architect at the AWS Accenture Business Group with a focus on emerging technologies such as generative AI, ML, and blockchain. When he isn't watching models converge, he can be seen biking or playing the clarinet.
