
Measuring the diameter of holes in metal parts, photographed with a telecentric, monochrome camera and OpenCV

This article describes how to measure the diameters of holes in metal parts from images taken with a telecentric, monochrome camera and processed with OpenCV. We hope it is a useful reference for anyone facing the same problem.

Problem description

Setup:

  • Camera: Blackfly S Mono 20.0 MP
  • Lens: Opto telecentric lens TC23080
  • Lighting: 16 green LEDs
  • Python: 3.7.3
  • OpenCV: 4.0+

Sorry about the image links, but a single picture is about 20 MB and we did not want to lose any quality.

Image samples:

https://drive.google.com/file/d/11PU-5fzvSJt1lKlmP-lQXhdsuCJPGKbN/view?usp=sharing
https://drive.google.com/file/d/1B3lSFx8YvTYv3hzuuuYtphoHBuyEdc4o/view

Case: There will be metal parts with different shapes, from 5x5 to 10x10 cm in size. Inside these metal parts there are roughly 2 to 10 circular holes that have to be detected very accurately. The actual sizes of the holes are unknown, as there is a huge variety of possible parts. The goal is to write a generic algorithm with OpenCV that can work with any metal part and detect the circular holes.

What we have tried: We have tried to detect the holes with the HoughCircles algorithm, with little to no success. The algorithm is either too sensitive or it does not detect the holes at all. We have experimented with different param1 and param2 values without success. We have also tried blurring the image and passing it through Canny before using HoughCircles, but that approach did not produce better results. The very same algorithm works significantly better on lower-resolution pictures. However, resolution cannot be sacrificed, as accuracy is extremely important in this project.

https://drive.google.com/file/d/1TRdDbperi37bha0uJVALS4C2dBuaNz6u/view?usp=sharing

The above circles were detected with the following parameters (see the sketch after the parameter list):

minradius=0
maxradius=0
dp=1
param1=100
param2=21
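
For reference, a minimal sketch of how these parameters might be passed to cv2.HoughCircles. The file name, the Gaussian blur and the minDist value are assumptions, since the question does not give them:

import cv2
import numpy as np

img = cv2.imread('geriausias.bmp', cv2.IMREAD_GRAYSCALE)
blurred = cv2.GaussianBlur(img, (9, 9), 2)   # blur kernel and sigma are assumptions
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                           param1=100, param2=21, minRadius=0, maxRadius=0)
if circles is not None:
    for x, y, r in circles[0]:
        print(f"circle at ({x:.0f}, {y:.0f}), diameter {2 * r:.1f} px")
        cv2.circle(img, (int(round(x)), int(round(y))), int(round(r)), 255, 2)
cv2.imwrite('detected.png', img)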

By playing around with the above parameters, we can get almost the results that we want. The problem arises when we use the same parameters on different pictures.

The end result we want is the diameter of a given circle with great accuracy, and we want the same algorithm to be usable on pictures of different parts.

What makes this problem different from the others posted is that we do not know the approximate radius of a given circle (so we cannot tune minradius, maxradius, param1, param2 or any other values).

Recommended answer

We know two things about these images:

  1. The objects are dark on a bright background.
  2. The holes are circular, and we want to measure all of them.

So all we need to do is detect the holes. This is actually quite trivial:

  1. Threshold (the background becomes the object, since it is bright).
  2. Remove edge objects.

What is left are the holes. Any hole touching the image edge will not be included. We can now easily measure these holes. Since we assume they are circular, we can do three things (an OpenCV sketch of the first two follows the list):

  1. Count the object pixels; this is an unbiased estimate of the area. From the area we determine the hole diameter.
  2. Detect the contour, find the centroid, and use, for example, the mean distance of the contour points to the centroid as the radius.
  3. Normalize the image intensities so that the background illumination has an intensity of 1 and the object with the holes in it has an intensity of 0. The integral of the intensities over each hole is then a sub-pixel-accurate estimate of its area (see the quick explanation of this method at the bottom).
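
Since the question asks for OpenCV, here is a rough sketch of how methods 1 and 2 could be approximated with OpenCV alone. The Otsu thresholding and the bounding-box test for edge objects are my choices, not part of the answer, and this will not give the sub-pixel accuracy of method 3:

import cv2
import numpy as np

# Threshold: the bright background (and the holes) become foreground
img = cv2.imread('geriausias.bmp', cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Label connected bright regions; anything whose bounding box touches the image
# edge (the background itself, plus any cut-off holes) is discarded
n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw)
h, w = bw.shape
for i in range(1, n):
    x0, y0, ww, hh, area = stats[i]
    if x0 == 0 or y0 == 0 or x0 + ww == w or y0 + hh == h:
        continue
    # Method 1: diameter from the pixel-count area
    d_area = 2.0 * np.sqrt(area / np.pi)
    # Method 2: twice the mean distance from the contour points to the centroid
    mask = (labels == i).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = contours[0].reshape(-1, 2).astype(float)
    d_contour = 2.0 * np.mean(np.linalg.norm(pts - centroids[i], axis=1))
    print(f"hole {i}: d = {d_area:.1f} px (area), d = {d_contour:.1f} px (contour)")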

This Python code, using DIPlib (disclaimer: I'm an author), shows how to do all three approaches:

import diplib as dip
import numpy as np

img = dip.ImageRead('geriausias.bmp')
img.SetPixelSize(1,'um') # Usually this info is in the image file
bin, thresh = dip.Threshold(img)   # the bright background (and the holes) become foreground
bin = dip.EdgeObjectsRemove(bin)   # discard everything touching the image edge, incl. the background
bin = dip.Label(bin)               # what remains are the holes; give each one its own label
msr = dip.MeasurementTool.Measure(bin, features=['Size','Radius'])
print(msr)
d1 = np.sqrt(np.array(msr['Size'])[:,0] * 4 / np.pi)   # method 1: diameter from the pixel-count area
print("method 1:", d1)
d2 = np.array(msr['Radius'])[:,1] * 2                  # method 2: twice the mean boundary-to-centroid distance
print("method 2:", d2)

bin = dip.Dilation(bin, 10) # we need larger regions to average over so we take all of the light
                            # coming through the hole into account.
img = (dip.ErfClip(img, thresh, thresh/4, "range") - (thresh*7/8)) / (thresh/4)  # background -> 1, object -> 0
msr = dip.MeasurementTool.Measure(bin, img, features=['Mass'])
d3 = np.sqrt(np.array(msr['Mass'])[:,0] * 4 / np.pi)   # method 3: diameter from the integrated intensity
print("method 3:", d3)

This gives the following output:

  |       Size |                                            Radius | 
- | ---------- | ------------------------------------------------- | 
  |            |        Max |       Mean |        Min |     StdDev | 
  |      (μm²) |       (μm) |       (μm) |       (μm) |       (μm) | 
- | ---------- | ---------- | ---------- | ---------- | ---------- | 
1 |  6.282e+04 |      143.9 |      141.4 |      134.4 |      1.628 | 
2 |  9.110e+04 |      171.5 |      170.3 |      168.3 |     0.5643 | 
3 |  6.303e+04 |      143.5 |      141.6 |      133.9 |      1.212 | 
4 |  9.103e+04 |      171.6 |      170.2 |      167.3 |     0.6292 | 
5 |  6.306e+04 |      143.9 |      141.6 |      126.5 |      2.320 | 
6 |  2.495e+05 |      283.5 |      281.8 |      274.4 |     0.9805 | 
7 |  1.176e+05 |      194.4 |      193.5 |      187.1 |     0.6303 | 
8 |  1.595e+05 |      226.7 |      225.3 |      219.8 |     0.8629 | 
9 |  9.063e+04 |      171.0 |      169.8 |      167.6 |     0.5457 | 

method 1: [282.8250363  340.57242408 283.28834869 340.45277017 283.36249824
 563.64770132 386.9715443  450.65294139 339.70023023]
method 2: [282.74577033 340.58808144 283.24878097 340.43862835 283.1641869
 563.59706479 386.95245928 450.65392268 339.68617582]
method 3: [282.74836803 340.56787463 283.24627163 340.39568372 283.31396961
 563.601641   386.89884807 450.62167913 339.68954136]

The image bin, after calling dip.Label, is an integer image in which the pixels of hole 1 all have value 1, those of hole 2 have value 2, and so on. So we keep the relationship between the measured sizes and which holes they belong to. I have not bothered making a markup image showing the sizes on the image, but this can easily be done, as you have seen in other answers.
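
As a possible sketch of such a markup, assuming DIPlib's 'Center' measurement feature and matplotlib (neither of which appears in the answer's code):

import diplib as dip
import numpy as np
import matplotlib.pyplot as plt

img = dip.ImageRead('geriausias.bmp')
bin, thresh = dip.Threshold(img)
bin = dip.EdgeObjectsRemove(bin)
bin = dip.Label(bin)
msr = dip.MeasurementTool.Measure(bin, features=['Size', 'Center'])
diameters = np.sqrt(np.array(msr['Size'])[:, 0] * 4 / np.pi)   # method 1 diameters, in pixels
centers = np.array(msr['Center'])                               # (x, y) centroid of each hole

plt.imshow(np.asarray(img), cmap='gray')
for (x, y), d in zip(centers, diameters):
    plt.text(x, y, f"{d:.0f}", color='red', ha='center', va='center')
plt.show()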

Because there is no pixel size information in the image files, I have imposed 1 micron per pixel. This is likely not correct; you will have to perform a calibration to obtain the actual pixel size.
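
A minimal sketch of what that calibration could look like, assuming a reference hole of known physical diameter can be imaged with the same setup (the numbers below are made up):

known_diameter_um = 2000.0        # assumed: certified reference hole of 2 mm
measured_diameter_px = 1234.5     # assumed: diameter reported by the pipeline, in pixels
um_per_pixel = known_diameter_um / measured_diameter_px

img.SetPixelSize(um_per_pixel, 'um')   # replaces the 1 um placeholder used above
# All subsequent MeasurementTool results are then reported directly in micrometres.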

A problem here is that the background illumination is too bright, giving saturated pixels. This causes the holes to appear larger than they actually are. It is important to calibrate the system so that the background illumination is close to the maximum the camera can record, but not at that maximum nor above it. For example, try to get the background intensity to be 245 or 250. The third method is the one most affected by bad illumination.

For the second image, the brightness is very low, giving a noisier image than necessary. I needed to modify the line bin = dip.Label(bin) into:

bin = dip.Label(bin, 2, 500) # Imposing minimum object size rather than filtering

It might be easier to do some noise filtering instead (a sketch follows the output below). The output was:

  |       Size |                                            Radius | 
- | ---------- | ------------------------------------------------- | 
  |            |        Max |       Mean |        Min |     StdDev | 
  |      (μm²) |       (μm) |       (μm) |       (μm) |       (μm) | 
- | ---------- | ---------- | ---------- | ---------- | ---------- | 
1 |  4.023e+06 |      1133. |      1132. |      1125. |     0.4989 | 

method 1: [2263.24621554]
method 2: [2263.22724164]
method 3: [2262.90068056]
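
A possible sketch of that alternative: smooth the image slightly before thresholding instead of imposing a minimum object size. The Gaussian filter and its sigma are my assumptions, not part of the answer's code:

img = dip.Gauss(img, 2)              # mild smoothing; a sigma of 2 px is an assumption
bin, thresh = dip.Threshold(img)
bin = dip.EdgeObjectsRemove(bin)
bin = dip.Label(bin)                 # no minimum object size needed once the noise is suppressed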


Quick explanation of method #3

This method is described in chapter 6 of the PhD thesis of Lucas van Vliet (Delft University of Technology, 1993).

Think of it this way: the amount of light that comes through a hole is proportional to the area of the hole (actually, it is given by 'area' × 'light intensity'). By adding up all the light that comes through the hole, we know the area of the hole. The code adds up all pixel intensities of the object as well as of some pixels just outside the object (I use 10 pixels there; how far out to go depends on the blurring).

The erfclip function is called a "soft clip" function: it ensures that the intensity inside the hole is uniformly 1 and the intensity outside the hole is uniformly 0, leaving intermediate grey values only around the edges. In this particular case, the soft clip avoids some issues with offsets in the imaging system and with a poor estimate of the light intensity. In other cases it is more important for avoiding issues with uneven colour of the objects being measured. It also reduces the influence of noise.
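
As an illustration of the soft-clip idea only (this is not DIPlib's ErfClip implementation; the function and its parameters are made up for the example):

import numpy as np
from scipy.special import erf

def soft_clip(values, center, width):
    # Smoothly maps values well below `center` to 0 and values well above it to 1,
    # with an erf-shaped transition of roughly `width` around `center`.
    return 0.5 * (1.0 + erf((values - center) / (width / 2.0)))

x = np.array([0.0, 0.4, 0.5, 0.6, 1.0])
print(soft_clip(x, center=0.5, width=0.2))   # ≈ [0.0, 0.08, 0.5, 0.92, 1.0]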
