Problem description
My aim is to render an OpenGL scene without a window, directly into a file. The scene may be larger than my screen resolution.
How can I do that?
If possible, I want to be able to choose the render area size freely, for example 10000x10000.
Recommended answer
It all starts with glReadPixels, which you will use to transfer the pixels stored in a specific buffer on the GPU to main memory (RAM). As you will notice in the documentation, there is no argument to choose which buffer. As is usual with OpenGL, the current buffer to read from is a piece of state, which you set with glReadBuffer.
So a very basic offscreen rendering method would look something like the following. This is C++-style pseudo code, so it may contain errors, but it should make the general flow clear:
//Before swapping
std::vector<std::uint8_t> data(width*height*4);
glReadBuffer(GL_BACK);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
This reads the current back buffer (usually the buffer you're drawing to). You should call it before swapping the buffers. Note that you can also perfectly well read the back buffer with the above method, clear it, and draw something totally different before swapping. Technically you can also read the front buffer, but this is often discouraged, since implementations are theoretically allowed to make optimizations that may leave your front buffer containing rubbish.
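Since the question asks for rendering directly into a file, here is a minimal sketch of how the `data` vector filled by glReadPixels could be written out as an image. The function name `write_ppm` and the output path are my own inventions, not part of the original answer. Note that glReadPixels returns rows bottom-up and (here) in BGRA order, while the binary PPM format expects top-down RGB:

```cpp
#include <cassert>
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Write BGRA pixel data (as returned by glReadPixels with GL_BGRA) to a
// binary PPM file. glReadPixels returns rows bottom-up, while PPM stores
// them top-down, so rows are flipped and BGRA is swizzled to RGB here.
void write_ppm(const std::string& path,
               const std::vector<std::uint8_t>& bgra,
               int width, int height)
{
    std::ofstream out(path, std::ios::binary);
    out << "P6\n" << width << " " << height << "\n255\n";
    for (int y = height - 1; y >= 0; --y) {      // flip vertically
        for (int x = 0; x < width; ++x) {
            const std::uint8_t* p =
                &bgra[(static_cast<std::size_t>(y) * width + x) * 4];
            out.put(static_cast<char>(p[2]));    // R
            out.put(static_cast<char>(p[1]));    // G
            out.put(static_cast<char>(p[0]));    // B
        }
    }
}
```

After the glReadPixels call above you would simply call `write_ppm("scene.ppm", data, width, height)`; the resulting PPM can then be converted to PNG or JPEG with any image tool.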
There are a few drawbacks to this. First of all, we don't really do offscreen rendering, do we? We render to the screen buffers and read from those. We can emulate offscreen rendering by never swapping the back buffer, but it doesn't feel right. Apart from that, the front and back buffers are optimized for displaying pixels, not for reading them back. That's where Framebuffer Objects come into play.
Essentially, an FBO lets you create a non-default framebuffer (unlike the FRONT and BACK buffers) that allows you to draw to a memory buffer instead of the screen buffers. In practice, you can draw either to a texture or to a renderbuffer. The former is optimal when you want to re-use the pixels in OpenGL itself as a texture (e.g. a naive "security camera" in a game), the latter if you just want to render and read back. With this, the code above would become something like the following. Again, this is pseudo code, so don't kill me if I mistyped or forgot some statements.
//Somewhere at initialization
GLuint fbo, render_buf;
glGenFramebuffers(1,&fbo);
glGenRenderbuffers(1,&render_buf);
glBindRenderbuffer(GL_RENDERBUFFER, render_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height); //GL_BGRA8 is not a valid internalformat; use GL_RGBA8 and read back as GL_BGRA
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, render_buf);
//At deinit:
glDeleteFramebuffers(1,&fbo);
glDeleteRenderbuffers(1,&render_buf);
//Before drawing
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
//After drawing
std::vector<std::uint8_t> data(width*height*4);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo); //the FBO must also be bound as the read framebuffer
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,&data[0]);
//Return to onscreen rendering:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
This is a simple example; in reality you likely also want storage for the depth (and stencil) buffer. You might also want to render to a texture, but I'll leave that as an exercise. In any case, you will now perform real offscreen rendering, and it might work faster than reading back the back buffer.
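For completeness, attaching depth storage to the same FBO follows exactly the same pattern as the color attachment. A sketch in the same pseudo-code style, assuming the `fbo` from above is still bound as GL_DRAW_FRAMEBUFFER:

```cpp
//At initialization, alongside render_buf:
GLuint depth_buf;
glGenRenderbuffers(1,&depth_buf);
glBindRenderbuffer(GL_RENDERBUFFER, depth_buf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth_buf);
//At deinit:
glDeleteRenderbuffers(1,&depth_buf);
```

Without this, depth testing simply has no buffer to work with in the FBO, and your scene will render with depth testing effectively disabled.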
Finally, you can use pixel buffer objects to make reading the pixels asynchronous. The problem is that glReadPixels blocks until the pixel data is completely transferred, which may stall your CPU. With PBOs, the implementation may return immediately, as it controls the buffer anyway. It is only when you map the buffer that the pipeline will block. However, PBOs may be optimized to buffer the data solely in RAM, so this block could take a lot less time. The read-pixels code would become something like this:
//Init:
GLuint pbo;
glGenBuffers(1,&pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
//Deinit:
glDeleteBuffers(1,&pbo);
//Reading:
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0); // 0 instead of a pointer, it is now an offset in the buffer.
//DO SOME OTHER STUFF (otherwise this is a waste of your time)
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo); //Might not be necessary...
void* pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
//... use pixel_data ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER); //release the mapping when done
The part in caps is essential. If you just issue a glReadPixels into a PBO, followed by a glMapBuffer of that PBO, you gained nothing but a lot of code. Sure, the glReadPixels might return immediately, but now the glMapBuffer will stall because it has to safely map the data from the read buffer to the PBO and to a block of memory in main RAM.
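One common way to get actual overlap, not spelled out above, is to alternate between two PBOs: issue the read into one while mapping the other, so each map touches data that was requested a frame earlier and has had time to arrive. A hedged sketch in the same pseudo-code style (the `pbo[2]` array and the `frame` counter are my additions, not from the original answer):

```cpp
//Init: two PBOs instead of one
GLuint pbo[2];
glGenBuffers(2, pbo);
for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
    glBufferData(GL_PIXEL_PACK_BUFFER, width*height*4, NULL, GL_DYNAMIC_READ);
}

//Each frame:
int write_idx = frame % 2;        //PBO receiving this frame's pixels
int read_idx  = (frame + 1) % 2;  //PBO holding last frame's pixels
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[write_idx]);
glReadPixels(0,0,width,height,GL_BGRA,GL_UNSIGNED_BYTE,0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[read_idx]);
void* pixel_data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY); //data from the previous frame
//... use pixel_data ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
```

The price is one frame of latency on the pixel data, which is usually acceptable when dumping frames to disk.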
Please also note that I use GL_BGRA everywhere; this is because many graphics cards internally use it as the optimal rendering format (or the GL_BGR version without alpha). It should be the fastest format for pixel transfers like this. I'll try to find the nvidia article I read about this a few months back.
When using OpenGL ES 2.0, GL_DRAW_FRAMEBUFFER might not be available; in that case you should just use GL_FRAMEBUFFER.