
What is the fastest way to transpose a matrix in C++?

Problem description

I have a (relatively big) matrix that I need to transpose. For example, assume that my matrix is

a b c d e f
g h i j k l
m n o p q r 

我希望結果如下:

a g m
b h n
c i o
d j p
e k q
f l r

最快的方法是什么?

Answer

This is a good question. There are many reasons you would want to actually transpose the matrix in memory rather than just swap coordinates, e.g. in matrix multiplication and Gaussian smearing.

First let me list one of the functions I use for the transpose (please see the end of my answer, where I found a much faster solution):

void transpose(float *src, float *dst, const int N, const int M) {
    #pragma omp parallel for
    for(int n = 0; n<N*M; n++) {
        int i = n/N;           // row of dst (dst is M x N)
        int j = n%N;           // column of dst
        dst[n] = src[M*j + i]; // src is N x M, stored row-major
    }
}

Now let's see why the transpose is useful. Consider matrix multiplication C = A*B. We could do it this way.

for(int i=0; i<N; i++) {
    for(int j=0; j<K; j++) {
        float tmp = 0;
        for(int l=0; l<M; l++) {
            tmp += A[M*i+l]*B[K*l+j]; // B is read with stride K: cache-unfriendly
        }
        C[K*i + j] = tmp;
    }
}

That way, however, is going to have a lot of cache misses. A much faster solution is to take the transpose of B first

transpose(B);
for(int i=0; i<N; i++) {
    for(int j=0; j<K; j++) {
        float tmp = 0;
        for(int l=0; l<M; l++) {
            tmp += A[M*i+l]*B[M*j+l]; // B now holds B^T (K x M, row stride M); both accesses are sequential
        }
        C[K*i + j] = tmp;
    }
}
transpose(B);

Matrix multiplication is O(n^3) and the transpose is O(n^2), so taking the transpose should have a negligible effect on the computation time (for large n). In matrix multiplication, loop tiling is even more effective than taking the transpose, but that's much more complicated.

I wish I knew a faster way to do the transpose (I found a faster solution; see the end of my answer). When Haswell/AVX2 comes out in a few weeks it will have a gather function. I don't know if that will be helpful in this case, but I could imagine gathering a column and writing out a row. Maybe it will make the transpose unnecessary.

For Gaussian smearing, what you do is smear horizontally and then smear vertically. But smearing vertically has the cache problem, so what you do is

Smear image horizontally
Transpose output
Smear output horizontally
Transpose output

Here is a paper by Intel explaining that: http://software.intel.com/en-us/articles/iir-gaussian-blur-filter-implementation-using-intel-advanced-vector-extensions

Lastly, what I actually do in matrix multiplication (and in Gaussian smearing) is not take exactly the transpose but take the transpose in widths of a certain vector size (e.g. 4 or 8 for SSE/AVX). Here is the function I use

void reorder_matrix(const float* A, float* B, const int N, const int M, const int vec_size) {
    #pragma omp parallel for
    for(int n=0; n<M*N; n++) {
        int k = vec_size*(n/N/vec_size);
        int i = (n/vec_size)%N;
        int j = n%vec_size;
        B[n] = A[M*i + k + j];
    }
}

I tried several functions to find the fastest transpose for large matrices. In the end the fastest result is to use loop blocking with block_size=16 (I found a faster solution using SSE and loop blocking - see below). This code works for any NxM matrix (i.e. the matrix does not have to be square).

inline void transpose_scalar_block(float *A, float *B, const int lda, const int ldb, const int block_size) {
    #pragma omp parallel for
    for(int i=0; i<block_size; i++) {
        for(int j=0; j<block_size; j++) {
            B[j*ldb + i] = A[i*lda +j];
        }
    }
}

inline void transpose_block(float *A, float *B, const int n, const int m, const int lda, const int ldb, const int block_size) {
    #pragma omp parallel for
    for(int i=0; i<n; i+=block_size) {
        for(int j=0; j<m; j+=block_size) {
            transpose_scalar_block(&A[i*lda +j], &B[j*ldb + i], lda, ldb, block_size);
        }
    }
}

The values lda and ldb are the widths (row strides) of the two matrices. These need to be multiples of the block size. To find the values and allocate the memory for e.g. a 3000x1001 matrix, I do something like this

#define ROUND_UP(x, s) (((x)+((s)-1)) & -(s))
const int n = 3000;
const int m = 1001;
int lda = ROUND_UP(m, 16);
int ldb = ROUND_UP(n, 16);

float *A = (float*)_mm_malloc(sizeof(float)*lda*ldb, 64);
float *B = (float*)_mm_malloc(sizeof(float)*lda*ldb, 64);

For 3000x1001 this returns ldb = 3008 and lda = 1008

I found an even faster solution using SSE intrinsics:

inline void transpose4x4_SSE(float *A, float *B, const int lda, const int ldb) {
    __m128 row1 = _mm_load_ps(&A[0*lda]);
    __m128 row2 = _mm_load_ps(&A[1*lda]);
    __m128 row3 = _mm_load_ps(&A[2*lda]);
    __m128 row4 = _mm_load_ps(&A[3*lda]);
     _MM_TRANSPOSE4_PS(row1, row2, row3, row4);
     _mm_store_ps(&B[0*ldb], row1);
     _mm_store_ps(&B[1*ldb], row2);
     _mm_store_ps(&B[2*ldb], row3);
     _mm_store_ps(&B[3*ldb], row4);
}

inline void transpose_block_SSE4x4(float *A, float *B, const int n, const int m, const int lda, const int ldb ,const int block_size) {
    #pragma omp parallel for
    for(int i=0; i<n; i+=block_size) {
        for(int j=0; j<m; j+=block_size) {
            int max_i2 = i+block_size < n ? i + block_size : n;
            int max_j2 = j+block_size < m ? j + block_size : m;
            for(int i2=i; i2<max_i2; i2+=4) {
                for(int j2=j; j2<max_j2; j2+=4) {
                    transpose4x4_SSE(&A[i2*lda +j2], &B[j2*ldb + i2], lda, ldb);
                }
            }
        }
    }
}
