
How to obtain the right alpha value to perfectly blend two images?

Problem description


I've been trying to blend two images. The current approach I'm taking is: I obtain the coordinates of the overlapping region of the two images, and only for the overlapping region I blend with a hardcoded alpha of 0.5 before adding. So basically I'm just taking half the value of each pixel from the overlapping regions of both images and adding them. That doesn't give me a perfect blend, because the alpha value is hardcoded to 0.5. Here's the result of blending 3 images:

As you can see, the transition from one image to another is still visible. How do I obtain the perfect alpha value that would eliminate this visible transition? Or is there no such thing, and I'm taking a wrong approach?

Here's how I'm currently doing the blending:

for i in range(3):
    base_img_warp[overlap_coords[0], overlap_coords[1], i] = base_img_warp[overlap_coords[0], overlap_coords[1], i] * 0.5
    next_img_warp[overlap_coords[0], overlap_coords[1], i] = next_img_warp[overlap_coords[0], overlap_coords[1], i] * 0.5
final_img = cv2.add(base_img_warp, next_img_warp)

If anyone would like to give it a shot, here are two warped images, and the mask of their overlapping region: http://imgur.com/a/9pOsQ

Solution

Here is the way I would do it in general:

#include <opencv2/opencv.hpp>

int main(int argc, char* argv[])
{
    cv::Mat input1 = cv::imread("C:/StackOverflow/Input/pano1.jpg");
    cv::Mat input2 = cv::imread("C:/StackOverflow/Input/pano2.jpg");

    // compute the vignetting masks. This is much easier before warping, but I will try...
    // They can be precomputed if the size and position of your ROI in the image don't change,
    // and can be precomputed and aligned if you can determine the ROI for every image.
    // The compression artifacts make it a little bit worse here; I try to extract all the non-black regions in the images.
    cv::Mat mask1;
    cv::inRange(input1, cv::Vec3b(10, 10, 10), cv::Vec3b(255, 255, 255), mask1);
    cv::Mat mask2;
    cv::inRange(input2, cv::Vec3b(10, 10, 10), cv::Vec3b(255, 255, 255), mask2);


    // now compute the distance from the ROI border:
    cv::Mat dt1;
    cv::distanceTransform(mask1, dt1, cv::DIST_L1, 3); // CV_DIST_L1 in OpenCV 2.x
    cv::Mat dt2;
    cv::distanceTransform(mask2, dt2, cv::DIST_L1, 3);

    // now you can use the distance values for blending directly: a smaller distance means a worse pixel (vignetting gets worse towards the image border)
    cv::Mat mosaic = cv::Mat(input1.size(), input1.type(), cv::Scalar(0, 0, 0));
    for (int j = 0; j < mosaic.rows; ++j)
    for (int i = 0; i < mosaic.cols; ++i)
    {
        float a = dt1.at<float>(j, i);
        float b = dt2.at<float>(j, i);

        // distances are not between 0 and 1, but this ratio is. The "better" a is compared to b, the higher alpha gets.
        // guard against 0/0 for pixels covered by neither mask
        float alpha = (a + b > 0) ? a / (a + b) : 0.0f;
        // actual blending: alpha*A + (1-alpha)*B
        mosaic.at<cv::Vec3b>(j, i) = alpha * input1.at<cv::Vec3b>(j, i) + (1 - alpha) * input2.at<cv::Vec3b>(j, i);
    }

    cv::imshow("mosaic", mosaic);

    cv::waitKey(0);
    return 0;
}

Basically you compute, for each pixel, the distance from the ROI border towards the centre of the object, and derive the alpha from the two blending-mask values: if one image has a high distance from its border and the other a low one, you prefer the pixel that is closer to its image centre. It would be better to normalize those values for cases where the warped images aren't of similar size. Even better and more efficient is to precompute the blending masks and warp them along with the images. Best of all would be to know the vignetting of your optical system and choose an identical blending mask (typically with lower values towards the border).

From the previous code you'll get these results:

ROI masks:

Blending masks (shown just for illustration; they are actually float matrices):

Image mosaic:
