
Python OpenCV HoughLinesP Fails to Detect Lines

Problem Description


    I am using OpenCV HoughLinesP to find horizontal and vertical lines. Most of the time it does not find any lines, and even when it does find a line, the result is not even close to the actual lines in the image.

    import cv2
    import numpy as np
    
    img = cv2.imread('image_with_edges.jpg')
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    
    
    flag,b = cv2.threshold(gray,0,255,cv2.THRESH_OTSU)
    
    element = cv2.getStructuringElement(cv2.MORPH_CROSS,(1,1))
    cv2.erode(b,element)
    
    edges = cv2.Canny(b,10,100,apertureSize = 3)
    
    lines = cv2.HoughLinesP(edges,1,np.pi/2,275, minLineLength = 100, maxLineGap = 200)[0].tolist()
    
    for x1,y1,x2,y2 in lines:
       for index, (x3,y3,x4,y4) in enumerate(lines):
    
        if y1==y2 and y3==y4: # Horizontal Lines
            diff = abs(y1-y3)
        elif x1==x2 and x3==x4: # Vertical Lines
            diff = abs(x1-x3)
        else:
            diff = 0
    
        if diff < 10 and diff is not 0:
            del lines[index]
    
        gridsize = (len(lines) - 2) / 2
    
       cv2.line(img,(x1,y1),(x2,y2),(0,0,255),2)
       cv2.imwrite('houghlines3.jpg',img)
    

    Input Image:

    Output Image: (see the Red Line):

    @ljetibo Try this with: c_6.jpg

    Solution

    There's quite a bit wrong here so I'll just start from the beginning.

    Ok, the first thing you do after opening an image is thresholding. I strongly recommend that you have another look at the OpenCV manual on thresholding and the exact meaning of the threshold methods.

    The manual mentions that

    cv2.threshold(src, thresh, maxval, type[, dst]) → retval, dst

    the special value THRESH_OTSU may be combined with one of the above values. In this case, the function determines the optimal threshold value using the Otsu’s algorithm and uses it instead of the specified thresh .

    I know it's a bit confusing because you don't actually combine THRESH_OTSU with any of the other methods (THRESH_BINARY etc.); unfortunately, the manual can be like that. What this method actually does is assume that there's a "foreground" and a "background" that follow a bi-modal histogram, and then it applies THRESH_BINARY, I believe.

    Imagine this as if you're taking an image of a cathedral or a tall building at midday. On a sunny day the sky will be very bright and blue, and the cathedral/building will be quite a bit darker. This means the group of pixels belonging to the sky will all have high brightness values, that is, they will be on the right side of the histogram, and the pixels belonging to the church will be darker, that is, in the middle and on the left side of the histogram.

    Otsu uses this to try and guess the right "cutoff" point, called thresh. For your image, Otsu's algorithm supposes that all that white on the side of the map is the background, and the map itself is the foreground. Therefore your image after thresholding looks like this:

    After this point it's not hard to guess what went wrong. But let's go on. What you're trying to achieve is, I believe, something like this:

    flag,b = cv2.threshold(gray,160,255,cv2.THRESH_BINARY)
    

    Then you go on and try to erode the image. I'm not sure why you're doing this: was your intention to "bold" the lines, or to remove noise? In any case, you never assigned the result of the erosion to anything. NumPy arrays, which is how images are represented, are mutable, but that's not how the syntax works:

    cv2.erode(src, kernel, [optionalOptions] ) → dst
    

    So you have to write:

    b = cv2.erode(b,element)
    

    Ok, now for the element and how erosion works. Erosion drags a kernel over an image. A kernel is a simple matrix with 1's and 0's in it. One of the elements of that matrix, usually the centre one, is called the anchor. The anchor is the element that will be replaced at the end of the operation. When you created

    cv2.getStructuringElement(cv2.MORPH_CROSS, (1, 1))
    

    what you created is actually a 1x1 matrix (1 column, 1 row). This makes erosion completely useless.

    What erosion does is first retrieve all the pixel brightness values from the original image where the kernel elements overlapping the image segment have a "1". Then it finds the minimal value of the retrieved pixels and replaces the anchor with that value.

    What this means, in your case, is that you drag a [1] matrix over the image, compare whether the source image pixel brightness is larger than, equal to, or smaller than itself, and then replace it with itself.

    If your intention was to remove "noise", then it's probably better to use a rectangular kernel over the image. Think of it this way: "noise" is the thing that "doesn't fit in" with its surroundings. So if you compare your centre pixel with its surroundings and find it doesn't fit, it's most likely noise.

    Additionally, I've said that it replaces the anchor with the minimal value retrieved by the kernel. Numerically, the minimal value is 0, which is coincidentally how black is represented in the image. This means that in your case of a predominantly white image, erosion would "bloat up" the black pixels: erosion will replace 255-valued white pixels with 0-valued black pixels whenever a black pixel is within reach of the kernel. In any case, the kernel shouldn't be of shape (1, 1), ever.

    >>> cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    array([[0, 1, 0],
           [1, 1, 1],
           [0, 1, 0]], dtype=uint8)
    

    If we erode the second image with a 3x3 rectangular kernel, we get the image below.

    Ok, now that we've got that out of the way, the next thing you do is find edges using Canny edge detection. The image you get from that is:

    Ok, now we look for EXACTLY vertical and EXACTLY horizontal lines ONLY. Of course, there are no such lines apart from the meridian on the left of the image (is that what it's called?), and the end image you get after doing it right would be this:

    Now, since you never described your exact idea, my best guess is that you want the parallels and meridians. You'll have more luck on maps of a smaller scale, because those aren't lines to begin with; they are curves. Additionally, is there a specific reason to use the probabilistic Hough transform? Doesn't the "regular" Hough suffice?

    Sorry for the too-long post, hope it helps a bit.


    The text here was added Nov. 24th in response to a request for clarification from the OP, because there's no way to fit the answer into a character-limited comment.

    I'd suggest the OP ask a new question more specific to the detection of curves, because you are dealing with curves, OP, not horizontal and vertical lines.

    There are several ways to detect curves, but none of them are easy. In order from simplest-to-implement to hardest:

    1. Use the RANSAC algorithm. Develop a formula describing the nature of the longitude and latitude lines depending on the map in question. I.e. latitude curves will be almost perfectly straight lines on the map when you're near the equator (with the equator being a perfectly straight line), but will be very curved, resembling circle segments, at high latitudes (near the poles). SciPy already has RANSAC implemented as a class; all you have to do is find it and then programmatically define the model you want to try to fit to the curves. Of course there's the ever-useful 4dummies text here. This is the easiest because all you have to do is the math.
    2. A bit harder would be to create a rectangular grid and then try to use cv findHomography to warp the grid into place on the image. For the various geometric transformations you can apply to the grid, you can check out the OpenCV manual. This is a sort of hack-ish approach and might work worse than 1., because it depends on being able to re-create a grid with enough details and objects on it that cv can identify the structures on the image you're trying to warp it to. This one requires you to do math similar to 1. and just a bit of coding to compose the end solution out of several different functions.
    3. To actually do it properly. There are mathematically neat ways of describing curves as a list of tangent lines along the curve. You can try to fit a bunch of shorter HoughLines to your image or image segment, then group all the found lines and determine, by assuming that they're tangents to a curve, whether they really follow a curve of the desired shape or are just random. See this paper on the matter. Of all the approaches this one is the hardest, because it requires quite a bit of solo coding and some math about the method.
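    As a sketch of option 1, here is a minimal, self-contained NumPy version of the RANSAC idea (an illustration only, not the SciPy class referred to above), fitting a straight line through points contaminated with outliers:

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=1.0, seed=0):
    """Fit y = m*x + c to 2-D points, ignoring outliers (minimal RANSAC sketch)."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        # Sample a minimal set: two distinct points define a candidate line
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue  # skip vertical candidates in this simple y = m*x + c model
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        # Count points whose vertical distance to the candidate line is small
        residuals = np.abs(points[:, 1] - (m * points[:, 0] + c))
        inliers = np.count_nonzero(residuals < inlier_tol)
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (m, c)
    return best_model

# Noisy samples of y = 2x + 1, plus gross outliers
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.1, 50)
outliers = rng.uniform(0, 30, (15, 2))
pts = np.vstack([np.column_stack([x, y]), outliers])

m, c = ransac_line(pts)   # recovers roughly m = 2, c = 1 despite the outliers
```

    Swapping the straight-line model for a circle-segment model is what the latitude-curve suggestion above amounts to; the sampling-and-consensus loop stays the same.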

    There could be easier ways; I've never actually had to deal with curve detection before. Maybe there are tricks to make it easier, I don't know. If you ask a new question, one that hasn't already been closed as answered, more people might notice it. Do make sure to ask a full and complete question on the exact topic you're interested in. People won't usually spend so much time writing on such a broad topic.

    To show you what you can do with just the Hough transform, check out the example below:

    import cv2
    import numpy as np
    
    def draw_lines(hough, image, nlines):
       n_x, n_y=image.shape
       #convert to color image so that you can see the lines
       draw_im = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
    
       for (rho, theta) in hough[0][:nlines]:
          try:
             x0 = np.cos(theta)*rho
             y0 = np.sin(theta)*rho
             pt1 = ( int(x0 + (n_x+n_y)*(-np.sin(theta))),
                     int(y0 + (n_x+n_y)*np.cos(theta)) )
             pt2 = ( int(x0 - (n_x+n_y)*(-np.sin(theta))),
                     int(y0 - (n_x+n_y)*np.cos(theta)) )
             alph = np.arctan( (pt2[1]-pt1[1])/( pt2[0]-pt1[0]) )
             alphdeg = alph*180/np.pi
             #OpenCv uses weird angle system, see: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_houghlines/py_houghlines.html
             if abs( np.cos( alph - 180 )) > 0.8: #0.995:
                cv2.line(draw_im, pt1, pt2, (255,0,0), 2)
             if rho>0 and abs( np.cos( alphdeg - 90)) > 0.7:
                cv2.line(draw_im, pt1, pt2, (0,0,255), 2)    
          except:
             pass
       cv2.imwrite("/home/dino/Desktop/3HoughLines.png", draw_im,
                 [cv2.IMWRITE_PNG_COMPRESSION, 12])   
    
    img = cv2.imread('a.jpg')
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    
    flag,b = cv2.threshold(gray,160,255,cv2.THRESH_BINARY)
    cv2.imwrite("1tresh.jpg", b)
    
    element = np.ones((3,3))
    b = cv2.erode(b,element)
    cv2.imwrite("2erodedtresh.jpg", b)
    
    edges = cv2.Canny(b,10,100,apertureSize = 3)
    cv2.imwrite("3Canny.jpg", edges)
    
    hough = cv2.HoughLines(edges, 1, np.pi/180, 200)   
    draw_lines(hough, b, 100)
    

    As you can see from the image below, the straight lines are only the longitudes. The latitudes are not as straight, so for each latitude you get several detected lines that behave like tangents to the curve. The blue lines are drawn by the if abs( np.cos( alph - 180 )) > 0.8: condition, while the red lines are drawn by the rho>0 and abs( np.cos( alphdeg - 90)) > 0.7 condition. Pay close attention when comparing the original image with the image with the lines drawn on it. The resemblance is uncanny (heh, get it?), but because they're not lines, a lot of it only looks like junk (especially that highest detected latitude line that seems too "angled"; in reality those lines form a perfect tangent to the latitude line at its thickest point, just as the Hough algorithm demands). Acknowledge that there are limitations to detecting curves with a line-detection algorithm.
