Problem description
I would like to develop a Python OpenCV script to duplicate/improve on a Gimp procedure I have developed. The goal of the procedure is to provide an x,y point array that follows the dividing line between grass and hard surfaces. This array will allow me to finish my 500 lb 54" wide pressure washing robot, which has a Raspberry Pi Zero (and camera), so that it can follow that edge at a speed of a couple inches per second. I will be monitoring and/or controlling the bot via its wifi video stream and an iPhone app while I watch TV on my couch.
Here is a sample original image (60x80 pixels):
The Gimp procedure is:
- Convert image to indexed 2 colors. Basically grass on one side and bricks or pavement on the other side. DARN SHADOWS oops that's me :)
- Of the two colors, take the lower Hue value and magic wand on a pixel of that value with the below wand settings. The Hue setting of 23 is how I remove shadows and the feather setting of 15 is how I remove islands/jaggies (grass in the cracks :).
- Do an advanced selection to path with the following advanced settings values (changes from default values are yellow). Basically I want just line segments and my (x,y) point array will be the Yellow path dots.
- Next I export the path to an .xml file from which I can parse and isolate the yellow dots in the above image. Here is the .xml file:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20010904//EN"
"http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
<svg xmlns="http://www.w3.org/2000/svg"
width="0.833333in" height="1.11111in"
viewBox="0 0 60 80">
<path id="Selection"
fill="none" stroke="black" stroke-width="1"
d="M 60.00,0.00
C 60.00,0.00 60.00,80.00 60.00,80.00
60.00,80.00 29.04,80.00 29.04,80.00
29.04,80.00 29.04,73.00 29.04,73.00
29.04,73.00 30.00,61.00 30.00,61.00
30.00,61.00 30.00,41.00 30.00,41.00
30.00,41.00 29.00,30.85 29.00,30.85
29.00,30.85 24.00,30.85 24.00,30.85
24.00,30.85 0.00,39.00 0.00,39.00
0.00,39.00 0.00,0.00 0.00,0.00
0.00,0.00 60.00,0.00 60.00,0.00 Z" />
</svg>
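Not part of the Gimp workflow itself, but here is a minimal sketch of how the anchor points could be parsed back out of an export like the one above using only the Python standard library. The file name path.svg is my assumption, and the de-duplication trick relies on Gimp's straight-segment export repeating each anchor as its own control points:

import re
import xml.etree.ElementTree as ET

# Assumption: the Gimp path export above was saved as 'path.svg'.
tree = ET.parse('path.svg')
ns = {'svg': 'http://www.w3.org/2000/svg'}
d = tree.getroot().find('svg:path', ns).attrib['d']

# Grab every "x,y" pair in the path data, then collapse consecutive
# repeats so only the anchor (yellow dot) coordinates remain.
pairs = [tuple(float(v) for v in m.split(',')) for m in re.findall(r'[-\d.]+,[-\d.]+', d)]
anchors = [p for i, p in enumerate(pairs) if i == 0 or p != pairs[i - 1]]
print(anchors)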
My goal for execution time for this OpenCV procedure on my Pi Zero is about 1-2 seconds or less (currently taking ~0.18 secs).
I have cobbled together something that sort of results in roughly the same points that are in the Gimp xml file. I am not at all sure it is doing what Gimp does with regard to the hue range of the mask. I also have not yet figured out how to apply the minimum radius on the mask; I am pretty sure I will need that when the mask picks up a 'grass' clump on the edge of the hard surface. Here are all the contour points so far (ptscanvas.bmp):
As of 7/6/2018 5:08 pm EST, here is the 'still messy' script that sort of works and found those points:
import numpy as np
import time, sys, cv2
img = cv2.imread('2-60.JPG')
cv2.imshow('Original',img)
# get a blank pntscanvas for drawing points on
pntscanvas = np.zeros(img.shape, np.uint8)
print (sys.version)
if sys.version_info[0] < 3:
raise Exception("Python 3 or a more recent version is required.")
def doredo():
start_time = time.time()
# Use kmeans to convert to 2 color image
hsv_img = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
Z = hsv_img.reshape((-1,3))
Z = np.float32(Z)
# define criteria, number of clusters(K)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 2
ret,label,center=cv2.kmeans(Z,K,None,criteria,10,cv2.KMEANS_RANDOM_CENTERS)
# Create a mask by selecting a hue range around the lowest hue of the 2 colors
if center[0,0] < center[1,0]:
hueofinterest = center[0,0]
else:
hueofinterest = center[1,0]
hsvdelta = 8
lowv = np.array([hueofinterest - hsvdelta, 0, 0])
higv = np.array([hueofinterest + hsvdelta, 255, 255])
mask = cv2.inRange(hsv_img, lowv, higv)
# Extract contours from the mask
ret,thresh = cv2.threshold(mask,250,255,cv2.THRESH_BINARY_INV)
im2,contours,hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
# Find the biggest area contour
cnt = contours[0]
max_area = cv2.contourArea(cnt)
for cont in contours:
if cv2.contourArea(cont) > max_area:
cnt = cont
max_area = cv2.contourArea(cont)
# Make array of all edge points of the largest contour, named allpnts
perimeter = cv2.arcLength(cnt,True)
epsilon = 0.01*cv2.arcLength(cnt,True) # 0.0125*cv2.arcLength(cnt,True) seems to work better
allpnts = cv2.approxPolyDP(cnt,epsilon,True)
end_time = time.time()
print("Elapsed cv2 time was %g seconds" % (end_time - start_time))
# Convert back into uint8, and make 2 color image for saving and showing
center = np.uint8(center)
res = center[label.flatten()]
res2 = res.reshape((hsv_img.shape))
# Save, show and print stuff
cv2.drawContours(pntscanvas, allpnts, -1, (0, 0, 255), 2)
cv2.imwrite("pntscanvas.bmp", pntscanvas)
cv2.imshow("pntscanvas.bmp", pntscanvas)
print('allpnts')
print(allpnts)
print("center")
print(center)
print('lowv',lowv)
print('higv',higv)
cv2.imwrite('mask.bmp',mask)
cv2.imshow('mask.bmp',mask)
cv2.imwrite('CvKmeans2Color.bmp',res2)
cv2.imshow('CvKmeans2Color.bmp',res2)
print ("Waiting for 'Spacebar' to Do/Redo OR 'Esc' to Exit")
while(1):
ch = cv2.waitKey(50)
if ch == 27:
break
if ch == ord(' '):
doredo()
cv2.destroyAllWindows()
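Not part of the script above, but since the end goal is a plain x,y point array for the robot to follow, here is a minimal sketch of flattening the (N, 1, 2) array that cv2.approxPolyDP produces; it assumes doredo() is modified to end with return allpnts:

# Assumption: doredo() has been changed to finish with 'return allpnts'.
allpnts = doredo()
# approxPolyDP returns an (N, 1, 2) int32 array; drop the middle axis and
# convert to plain (x, y) tuples for the robot's path follower.
waypoints = [(int(x), int(y)) for x, y in allpnts.reshape(-1, 2)]
print(waypoints)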
Left to do:
- Add mask radiusing on non-edge pixels to take care of raw masks like this one that Gimp creates before it runs a min radius on the mask:
1a. EDIT: As of July 9, 2018, I have been concentrating on this issue as it seems to be my biggest problem. I am unable to have cv2.findContours smooth out the 'edge grass' as well as Gimp does with its magic wand radius feature. On the left is a 2-colour 'problem' mask with the resultant 'Red' points overlaid, found directly with cv2.findContours; on the right, the Gimp radiused mask has been applied to the left image's 'problem' mask before cv2.findContours is run, producing the right image and points:
I have tried looking at Gimp's source code but it is way beyond my comprehension, and I cannot find any OpenCV routines that do this. Is there a way to apply minimum-radius smoothing to the 'non-edge' pixels of an edge mask in OpenCV??? (One possible morphological approximation is sketched just after this list.) By 'non-edge' I mean that, as you can see, Gimp does not radius these 'corners' (inside the Yellow highlight) but only seems to apply the radius smoothing to edges 'inside' the image (Note: Gimp's radiusing algorithm also eliminates all the small islands in the mask, which means you don't have to find the largest-area contour after cv2.findContours is applied to get the points of interest):
- Remove irrelevant array points from allpnts that are on the image edge.
- Figure out why the array points that it finds seem to border the green grass instead of the hard surface; I thought I was working with the hard surface hue.
- Figure out why the hard surface color in CvKmeans2Color.bmp appears orange and not beige as in Gimp's conversion, AND why it doesn't match pixel for pixel with Gimp's conversion. Here are CvKmeans2Color.bmp and Gimp's:
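Referring back to item 1a above: the closest standard OpenCV operation I know of for a minimum-radius style cleanup is a morphological open/close with a circular structuring element. Whether it matches Gimp's radius behaviour (including leaving the image-border 'corners' alone) is untested, and the radius value here is a guess, so treat this purely as a sketch; it assumes mask is the cv2.inRange output from the script above:

# Untested sketch: approximate minimum-radius smoothing of the 2-colour mask
# with a circular kernel. 'mask' is the cv2.inRange output from the script above.
radius = 5  # guess; would need tuning against the Gimp result
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2*radius + 1, 2*radius + 1))
# Opening removes grass bumps and small islands narrower than the radius;
# closing then fills notches of comparable size on the other side of the edge.
smoothmask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
smoothmask = cv2.morphologyEx(smoothmask, cv2.MORPH_CLOSE, kernel)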
EDIT: As of 5pm EST July 12, 2018: I have resorted to the language I can most easily create code with, VB6 (ughh, I know). Anyway, I have been able to make a line/edge smoothing routine that works at the pixel level to produce the min-radius mask I want. It works like a PacMan roaming along the right side of an edge, staying as close to it as it can, leaving a breadcrumb trail on the Pac's left side. Not sure I can make a Python script from that code, but at least I have a place to start, since nobody has confirmed there is an alternative OpenCV way to do it. If anyone is interested, here is a compiled .exe file that should run on most Windows systems without an install (I think). Here is a screenshot from it (Blue/GreenyBlue pixels are the unsmoothed edge and Green/GreenyBlue pixels are the radiused edge):
You can get the gist of my process logic by this VB6 routine:
Sub BeginFollowingEdgePixel()
Dim lastwasend As Integer
wasinside = False
While (1)
If HitFrontBumper Then
GoTo Hit
Else
Call MoveForward
End If
If circr = orgpos(0) And circc = orgpos(1) Then
orgpixr = -1 'resets Start/Next button to begin at the first found blue edge pixel
GoTo outnow 'this condition indicates that you have followed all blue edge pixels
End If
Call PaintUnderFrontBumperWhite
Call PaintGreenOutsideLeftBumper
nomove:
If NoLeftBumperContact Then
Call MoveLeft
Call PaintUnderLeftBumperWhite
Call PaintGreenOutsideLeftBumper
If NoLeftBumperContact Then
If BackBumperContact Then
Call MakeLeftTheNewForward
End If
End If
ElseIf HitFrontBumper Then
Hit:
Call PaintAheadOfForwardBumperGreen
Call PaintGreenOutsideLeftSide
Call MakeRightTheNewForward
GoTo nomove
Else
Call PaintAheadOfForwardBumperGreen
Call PaintGreenOutsideLeftSide
Call PaintUnderFrontBumperWhite
End If
If (circr = 19 + circrad Or circr = -circrad Or circc = 19 + circrad Or circc = -circrad) Then
If lastwasend = 0 And wasinside = True Then
'finished following one edge pixel
lastwasend = 1
GoTo outnow
Call redrawit
End If
Else
If IsCircleInsideImage Then
wasinside = True
End If
lastwasend = 0
End If
Pause (pausev) 'seconds between moves - Pressing Esc advances early
Wend
outnow:
End Sub
Okay, I finally had time to look at this. I will address each of your points and then show the changes in the code. Let me know if you have any questions or suggestions.
Looks like you were able to do this yourself well enough.
1.a. This can be taken care of by blurring the image before doing any processing on it. The following changes to the code were made to accomplish this:
...
start_time = time.time()
blur_img = cv2.GaussianBlur(img,(5,5),0) #here
# Use kmeans to convert to 2 color image
hsv_img = cv2.cvtColor(blur_img, cv2.COLOR_BGR2HSV)
...
I have changed the code to remove points that are on a line that perfectly follows the side of the image. It should be basically impossible for a grass edge to also coincide with this.
...
allpnts = cv2.approxPolyDP(cnt,epsilon,True)

new_allpnts = []

for i in range(len(allpnts)):
    a = (i-1) % len(allpnts)
    b = (i+1) % len(allpnts)

    if ((allpnts[i,0,0] == 0 or allpnts[i,0,0] == (img.shape[1]-1)) and (allpnts[i,0,1] == 0 or allpnts[i,0,1] == (img.shape[0]-1))):
        tmp1 = allpnts[a,0] - allpnts[i,0]
        tmp2 = allpnts[b,0] - allpnts[i,0]
        if not (0 in tmp1 and 0 in tmp2):
            new_allpnts.append(allpnts[i])
    else:
        new_allpnts.append(allpnts[i])
...
cv2.drawContours(pntscanvas, new_allpnts, -1, (0, 0, 255), 2)
...
Due to how the contours are found in the image, we can simply flip the thresholding function and find the contour around the other part of the image. Changes are below:
...
# Extract contours from the mask
ret,thresh = cv2.threshold(mask,250,255,cv2.THRESH_BINARY) #here
im2,contours,hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
...
As for the color differences, you converted your image into HSV format and you are not switching it back to BGR before saving. This change to HSV does give you better results, so I would keep it, but it is a different palette. Changes are below:
...
cv2.imshow('mask.bmp',mask)

res2 = cv2.cvtColor(res2, cv2.COLOR_HSV2BGR)
cv2.imwrite('CvKmeans2Color.bmp',res2)
cv2.imshow('CvKmeans2Color.bmp',res2)
...
Disclaimer: These changes are based on the Python code above. Any changes to the Python code that are not in the provided code may render my changes ineffective.