Abstract:
Printed circuit boards (PCBs) are critical components of electronic devices, making high-precision nondestructive testing essential for ensuring product quality; in particular, accurate identification of key elements such as traces, pads, and vias is crucial. Although deep learning has been applied to automatic PCB inspection, its reliance on large amounts of annotated data incurs high costs. Strategies such as "unsupervised pre-training combined with supervised fine-tuning" have reduced the dependence on annotations to some extent and improved segmentation accuracy, but their generalization in real-world open scenarios remains insufficient. Vision foundation models such as the Segment Anything Model (SAM), with their strong general segmentation ability and lower annotation demands, offer a new direction for this field. SAM's real-time interactive functionality has also enabled a "pre-training + fine-tuning + human–machine collaboration" inspection paradigm: by incorporating expert knowledge to dynamically guide the model, this approach improves the accuracy and robustness of PCB image segmentation in complex environments and facilitates practical deployment. However, directly applying SAM to PCB element segmentation still faces challenges, including limited cross-scenario adaptability, deployment difficulties, and prompting methods that require further optimization. This paper systematically reviews recent progress in this field, focusing on innovations in SAM-oriented fine-tuning strategies, lightweight designs, and structural optimization, and discusses future research directions.