
Trajectory intersection in Python

This article describes how to find the intersection of two trajectories in Python; the answer below may serve as a useful reference for anyone facing the same problem.

Problem description


I'm detecting persons and vehicles using TensorFlow and Python. I calculate the trajectories, predict them using a Kalman filter, and fit a line to each trajectory for prediction.

My problem is: how would I find the intersection point and the time of collision between two trajectories?

I tried line-to-line intersection, but the fitted line is not always a two-point line; it can be a polyline. Here is my attempt:

    detections = tracker.update(np.array(z_box))

    for trk in detections[0]:
        trk = trk.astype(np.int32)
        helpers.draw_box_label(img, trk, trk[4])  # Draw the bounding box on the frame
        # trk holds [y1, x1, y2, x2, track_id], so this is the box centre (x, y)
        centerCoord = ((trk[1] + trk[3]) / 2, (trk[0] + trk[2]) / 2)
        point_lists[trk[4]].append(centerCoord)  # accumulate centre history per track ID
        x = [i[0] for i in point_lists[trk[4]]]
        y = [i[1] for i in point_lists[trk[4]]]
        p = np.polyfit(x, y, deg=1)          # fit a straight line to the centre history
        y = p[1] + p[0] * np.array(x)        # evaluate the fit at the observed x values
        fitted = list(zip(x, y))
        cv2.polylines(img, np.int32([fitted]), False, color=(255, 0, 0))
        for other in detections[0]:
            other = other.astype(np.int32)
            if other[4] != trk[4]:           # skip comparing a track against itself
                x2 = [i[0] for i in point_lists[other[4]]]
                y2 = [i[1] for i in point_lists[other[4]]]
                p2 = np.polyfit(x2, y2, deg=1)
                y2 = p2[1] + p2[0] * np.array(x2)
                other_fitted = list(zip(x2, y2))
                if line_intersection(fitted, other_fitted):
                    print("intersection")
                else:
                    print("not intersection")

Solution

This is a bit of a broad topic, so I will focus only on the math/physics part, as I get the feeling the CV/DIP part is already handled by both of you askers (andre ahmed and chris burgees).

For simplicity I am assuming linear movement at constant speed. So how to do this:

  1. obtain the 2D position of each object in 2 separate frames a known time dt apart

    so obtain the 2D center (or corner, or whatever reference point) position on the image for each object in question.

  2. convert them to 3D

    so using known camera parameters or known background info about the scene, you can un-project the 2D position on screen into a 3D position relative to the camera. This gets rid of the non-linear interpolation that would otherwise be needed if this were handled as a purely 2D case (a minimal un-projection sketch is given at the end of this answer).

    There are more options for obtaining the 3D position, depending on what you have at your disposal. For example:

    • Transformation of 3D objects related to vanishing points and horizon line
  3. obtain the actual speed of each object

    the speed vector is simply:

    vel = ( pos(t+dt) - pos(t) )/dt
    

    so simply subtract the positions of the same object in 2 consecutive frames and divide by the frame period (or the time interval between the frames used).

  4. test each pair of objects for collision

    this is the funny stuff. Yes, you can solve a system of inequalities like:

    | ( pos0 + vel0 * t ) - (pos1 + vel1 * t ) | <= threshold
    

    but there is a simpler way, which I used here:

    • Collision detection between 2 "linearly" moving objects in WGS84

    The idea is to compute the time t at which the tested objects are closest together (if they are nearing each other).

    so we can extrapolate the future position of each object like this:

    pos(t) = pos(t0) + vel*(t-t0)
    

    where t is the actual time and t0 is some start time (for example t0=0).

    let's assume we have 2 objects (pos0,vel0 and pos1,vel1) that we want to test, so first compute 2 iterations of their distance:

    pos0(0) = pos0;
    pos1(0) = pos1;
    dis0 = | pos1(0) - pos0(0) |
    
    pos0(dt) = pos0 + vel0*dt;
    pos1(dt) = pos1 + vel1*dt;
    dis1 = | pos1(dt) - pos0(dt) |
    

    where dt is some small enough time step (to avoid skipping through a collision). Now if (dis0<dis1) the objects are moving away from each other, so there is no collision; if (dis0==dis1) the objects are not moving, or are moving parallel to each other; and only if (dis0>dis1) are the objects nearing each other, so we can estimate (with t measured in units of dt):

    dis(t) = dis0 + (dis1-dis0)*t
    

    and a collision expects dis(t)=0, so we can extrapolate again:

    0 = dis0 + (dis1-dis0)*t
    (dis0-dis1)*t = dis0 
    t = dis0 / (dis0-dis1)
    

    where t is the estimated time of collision, again in units of dt (multiply by dt to get real time). Of course all of this treats the movement as linear and extrapolates a lot, so it is not accurate, but you can repeat the estimate over consecutive frames, and the result gets more accurate as the time of collision approaches. Also, to be sure, you should extrapolate the position of each object at the estimated time of collision to verify the result (if they do not actually collide there, the extrapolation was only numerical and the objects merely neared each other for a time). A runnable sketch of steps #3 and #4 follows this list.
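
To make steps #3 and #4 concrete, here is a minimal numpy sketch of the velocity estimate and the distance-extrapolation collision test described above. The function names and example numbers are mine, the positions are assumed to be the un-projected 3D coordinates from step #2, and the returned time is in real units (the text's t multiplied by dt):

    import numpy as np

    def velocity(pos_prev, pos_curr, dt):
        # step #3: vel = ( pos(t+dt) - pos(t) ) / dt
        return (np.asarray(pos_curr, float) - np.asarray(pos_prev, float)) / dt

    def collision_time(pos0, vel0, pos1, vel1, dt):
        # step #4: extrapolate the distance between two objects linearly
        pos0, vel0 = np.asarray(pos0, float), np.asarray(vel0, float)
        pos1, vel1 = np.asarray(pos1, float), np.asarray(vel1, float)
        dis0 = np.linalg.norm(pos1 - pos0)                              # distance now
        dis1 = np.linalg.norm((pos1 + vel1 * dt) - (pos0 + vel0 * dt))  # distance after dt
        if dis1 >= dis0:
            return None  # moving apart, static or parallel: no collision estimate
        # dis(t) = dis0 + (dis1 - dis0)*t with t in units of dt;
        # solving dis(t) = 0 gives t = dis0 / (dis0 - dis1)
        return dt * dis0 / (dis0 - dis1)

    dt = 0.04  # frame period for a 25 fps video (assumed)
    v0 = velocity([0.0, 0, 0], [0.2, 0, 0], dt)     # -> [ 5, 0, 0] m/s
    v1 = velocity([10.0, 0, 0], [9.8, 0, 0], dt)    # -> [-5, 0, 0] m/s
    # 9.6 m apart, closing at a combined 10 m/s -> collision in 0.96 s
    print(collision_time([0.2, 0, 0], v0, [9.8, 0, 0], v1, dt))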

As mentioned before, the conversion to 3D (bullet #2) is not necessary, but it gets rid of the nonlinearities, so simple linear interpolation/extrapolation can be used later on, which greatly simplifies things.
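
For completeness, one common way to do that un-projection when the camera intrinsics are known and the objects move on a flat ground plane is to cast a ray through the pixel and scale it until it hits the plane. This is only a sketch under those assumptions; the intrinsic values, camera height and axis convention below are placeholders, not values from the question:

    import numpy as np

    def unproject_to_ground(u, v, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0,
                            cam_height=5.0):
        # Ray through pixel (u, v) in camera coordinates (pinhole model,
        # z pointing forward, y pointing down, camera held level).
        ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
        if ray[1] <= 0:
            return None  # pixel at or above the horizon: the ray never hits the ground
        # The ground plane sits at y = cam_height in this frame; scale the ray to reach it.
        scale = cam_height / ray[1]
        return ray * scale  # 3D point [X, cam_height, Z] relative to the camera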

This concludes the article on trajectory intersection in Python. We hope the answer above helps, and thank you for supporting html5模板網!


Related articles

How to draw a rectangle around a region of interest in python
How can I detect and track people using OpenCV?
How to apply threshold within multiple rectangular bounding boxes in an image?
How can I download a specific part of Coco Dataset?
Detect image orientation angle based on text direction
Detect centre and angle of rectangles in an image using Opencv