
Choosing/Normalizing HoG parameters for object detection?

Problem description

I'm using HoG features for object detection via classification.

I'm confused about how to deal with HoG feature vectors of different lengths.

I've trained my classifier using training images that all have the same size.

Now, I'm extracting regions from my image on which to run the classifier - say, using the sliding windows approach. Some of the windows that I extract are a lot bigger than the size of images the classifier was trained on. (It was trained on the smallest possible size of the object that might be expected in test images).

The problem is, when the windows I need to classify are bigger than the training image sizes, then the HoG feature vector is also much bigger than the trained model's feature vector.

So how can I use the model's feature vector to classify the extracted window?

For example, let's take the dimensions of one extracted window, which is 360x240, and call it extractedwindow. Then let's take one of my training images, which is only 20x30, and call it trainingsample.

If I take the HoG feature vectors, like this:

from skimage.feature import hog

# Note: recent scikit-image releases spell this keyword `visualize`,
# and `normalise` has been replaced by `transform_sqrt`.
fd1, hog_image1 = hog(extractedwindow, orientations=8, pixels_per_cell=(16, 16), cells_per_block=(1, 1), visualise=True, normalise=True)

fd2, hog_image2 = hog(trainingsample, orientations=8, pixels_per_cell=(16, 16), cells_per_block=(1, 1), visualise=True, normalise=True)

print(len(fd1))
print(len(fd2))

Then this is the difference in length between the feature vectors:

2640
616

So how is this dealt with? Are extracted windows supposed to be scaled down to the size of the samples the classifier was trained on? Or should the parameters for HoG features be changed/normalized according to each extracted window? Or is there another way to do this?

I'm personally working in python, using scikit-image, but I guess the problem is independent of what platform I'm using.

Recommended answer

As you say, HOG basically uses a parameter that establishes the cell size in pixels. So if the image size changes, the number of cells differs, and the descriptor changes size as well.
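This dependence of descriptor length on window size can be made concrete. The following is a minimal sketch, assuming scikit-image's block layout and the question's parameters (8 orientations, 16x16 cells, 1x1 blocks); it reproduces the 2640-value descriptor reported for the 360x240 window:

```python
# Descriptor length for a scikit-image-style HoG: blocks slide one cell
# at a time over the grid of whole cells, and each block contributes
# cells_per_block**2 * orientations histogram values.
def hog_length(height, width, orientations=8, pixels_per_cell=16, cells_per_block=1):
    cells_r = height // pixels_per_cell       # whole cells vertically
    cells_c = width // pixels_per_cell        # whole cells horizontally
    blocks_r = cells_r - cells_per_block + 1
    blocks_c = cells_c - cells_per_block + 1
    return blocks_r * blocks_c * cells_per_block ** 2 * orientations

print(hog_length(240, 360))  # 360x240 window -> 2640
```

With 1x1 blocks this reduces to (number of cells) x (orientations), which is why doubling the window dimensions roughly quadruples the descriptor.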

The main approach with HOG is to use windows of the same size in pixels, during both training and testing. So extractedwindow should be the same size as trainingsample.

In that reference, one user says:

HOG is not scale invariant. Getting the same length feature vector for each image does not guarantee the scale invariance.

So you should use the same window size...
