How to render ultra-realistic looking models and textures in Blender? [closed]

I know, I know: I sculpt a model, bake the textures, apply them, and that’s it.

Come on, that cannot be everything when it comes to ultra-realistic 3D models.

I have seen people creating textures by applying an enormous number of extra steps, like subsurface scattering, then extra layers of textures; once zoomed in, a user can see more textures that are small and normally not rendered from far away.

Can you make a list of such things that would not be included in the basic “bake the diffuse, AO, specular, and normal maps, apply them, and that’s it” workflow?

So if you know any extra technique, please post it. Yes, I know lighting is important, but that is a whole different question. I probably sound a little irritated in this question; that is because I have already asked in about five forums and no one has been able to tell me any of these other techniques, so that I know where to expand my knowledge.

Why does the triangle rendered with OpenGL ES 2.0 and an SDL 2.0 context vanish after a single render if events are not polled?

I was experimenting with OpenGL ES 2.0 and, being new to OpenGL, I was trying to render a simple triangle. I was shocked to see that if I do not call SDL_PollEvent(...) after glDrawArrays(...) in the game loop, the triangle renders on the screen for a split second and then vanishes altogether! But if I call SDL_PollEvent, everything is fine! Can anyone explain the reason for this behavior?

Here is the interesting part of my code:

The code works perfectly if I uncomment the commented block:

uint32_t vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glEnableVertexAttribArray(pos);
glVertexAttribPointer(pos, 3, GL_FLOAT, GL_FALSE, 3*sizeof(float), (void*)0);

bool run = true;
SDL_Event e;
while (run)
{
    glDrawArrays(GL_TRIANGLES, 0, 3);
    SDL_GL_SwapWindow(window);

    /*while(SDL_PollEvent(&e)) {
        switch(e.type)
        {
            case SDL_QUIT:
                run = false;
                break;
        }
    }*/
}


Vertex Shader:

precision mediump float;

attribute vec3 pos;

void main()
{
    gl_Position = vec4(pos, 1.0);
}

Fragment Shader:

precision mediump float;

void main()
{
    gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);
}

Any help will be greatly appreciated. Thank you all in advance!

Does Fog Cloud render a Beholder’s eyestalks ineffective?

The fog cloud spell creates a heavily obscured area, which effectively imposes the blinded condition on characters inside it, per this answer.

A beholder’s eye stalks (Monster Manual, p. 28) can target “one to three targets it can see within 120 feet of it.” Like many spells, this suggests the beholder must be able to see its target.

The Beholder’s central eye projects an Antimagic Cone (essentially an antimagic field) in a 150-foot cone. An antimagic field temporarily suppresses magical effects it comes into contact with. As the central eye turns away, the Antimagic Cone sweeps off in another direction and the fog cloud returns, as suggested by the answer to this question.

If, however, the Beholder tries to use its eyestalk’s spell effects to target something it can see in the cone, the Antimagic Cone prevents the magic from working.

Lastly, the Beholder has no dispel magic to remove the fog cloud.

In effect, with fog cloud up, the Beholder either cannot see the characters to target them, or, when it can see the characters, its eyestalk spells do not function.

Are we interpreting this correctly or is there some other way for the Beholder to use its spells in a fog cloud?

DirectX/SharpDX How to render bitmap drawing to texture off-screen?

This isn’t specifically related to gamedev, but SO seems dead when it comes to questions about DirectX so I thought I’d try asking here:

I’ve been making changes to a desktop recording project found here, which uses OutputDuplication. The project originally records the entire desktop to video; my changes instead allow specifying a region of the desktop to record.

My main change is implementing a staging texture: I copy the frame output to a render texture, draw the mouse pointer onto it via a render target, then copy a subresource region of that render texture to the staging texture, which finally gets written to the sink writer.

This works fine, but the original project renders using a swap chain with a window-handle render target. I want to remove the swap chain and render off-screen instead, since I don’t want to display the output in a window; I only want it written to an mp4 file with the sink writer. How do I achieve this correctly?

Here is the relevant code for getting the output frame:

private void AcquireFrame() {
    SharpDX.DXGI.Resource frame;
    OutputDuplicateFrameInformation frameInfo;
    OutputDuplication.Value.AcquireNextFrame(500, out frameInfo, out frame);

    using(frame) {
        if(frameInfo.LastMouseUpdateTime != 0) {
            Cursor.Visible = frameInfo.PointerPosition.Visible;
            Cursor.Position = frameInfo.PointerPosition.Position;
            Cursor.BufferSize = frameInfo.PointerShapeBufferSize;
            ComputeCursorBitmap(); //Retrieves data related to mouse pointer and populates Cursor variables
        }

        //if(frameInfo.LastPresentTime != 0) {
        //    swapChain.Value.Present(1, 0); //Removing use of swapchain
        //}

        if(RecordingState == DuplicatorState.Started && (frameInfo.LastPresentTime != 0 | (frameInfo.LastPresentTime != 0 && Cursor.Bitmap != null))) {
            var renderDescription = new Texture2DDescription() {
                CpuAccessFlags = CpuAccessFlags.None,
                BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
                Format = Format.B8G8R8A8_UNorm,
                Width = SourceSize.Width,
                Height = SourceSize.Height,
                OptionFlags = ResourceOptionFlags.None,
                MipLevels = 1,
                ArraySize = 1,
                SampleDescription = { Count = 1, Quality = 0 },
                Usage = ResourceUsage.Default,
            };
            videoFrame.RenderTexture = new Texture2D(OutputDevice.Value, renderDescription);

            var stagingDescription = new Texture2DDescription() {
                CpuAccessFlags = CpuAccessFlags.Read,
                BindFlags = BindFlags.None,
                Format = Format.B8G8R8A8_UNorm,
                Width = RegionSize.Width,
                Height = RegionSize.Height,
                OptionFlags = ResourceOptionFlags.None,
                MipLevels = 1,
                ArraySize = 1,
                SampleDescription = { Count = 1, Quality = 0 },
                Usage = ResourceUsage.Staging,
            };
            videoFrame.StagingTexture = new Texture2D(OutputDevice.Value, stagingDescription);

            using(var res = frame.QueryInterface<SharpDX.Direct3D11.Resource>()) {
                OutputDevice.Value.ImmediateContext.CopyResource(res, videoFrame.RenderTexture);

                if(Cursor.Visible) {
                    using(var fac = new SharpDX.Direct2D1.Factory1())
                    using(var dxDev = OutputDevice.Value.QueryInterface<SharpDX.DXGI.Device>())
                    using(var dc = new SharpDX.Direct2D1.DeviceContext(new SharpDX.Direct2D1.Device(fac, dxDev), DeviceContextOptions.EnableMultithreadedOptimizations))
                    using(var surface = videoFrame.RenderTexture.QueryInterface<Surface>())
                    using(var bmp = new Bitmap1(dc, surface))
                    using(Cursor.Bitmap = new Bitmap(dc, Cursor.Size, Cursor.DataPointer, Cursor.Pitch, Cursor.Props)) {
                        dc.Target = bmp;
                        dc.BeginDraw();

                        int captureX = ((Cursor.Position.X - Cursor.Hotspot.X) * SourceSize.Width) / SourceSize.Width;
                        int captureY = ((Cursor.Position.Y - Cursor.Hotspot.Y) * SourceSize.Height) / SourceSize.Height;
                        RawRectangleF rect = new RawRectangleF(
                            captureX,
                            captureY,
                            captureX + (Cursor.Size.Width * SourceSize.Width) / SourceSize.Width,
                            captureY + (Cursor.Size.Height * SourceSize.Height) / SourceSize.Height);

                        dc.DrawBitmap(Cursor.Bitmap, rect, 1, BitmapInterpolationMode.NearestNeighbor);

                        //dc.Flush();
                        dc.EndDraw();
                    }
                }

                OutputDevice.Value.ImmediateContext.CopySubresourceRegion(videoFrame.RenderTexture, 0, new ResourceRegion(RegionLocation.X, RegionLocation.Y, 0, RegionSize.Width, RegionSize.Height, 1), videoFrame.StagingTexture, 0, 0, 0);
                videoFrame.RenderTexture.Dispose();
            }

            WriteVideoSample(videoFrame);
        }
        OutputDuplication.Value.ReleaseFrame();
    }
}

If I remove the swap chain, drawing the mouse-pointer bitmap to the render texture surface on the Direct2D1 device context breaks: it stutters and doesn’t update smoothly each frame.

I found an answer related to my problem here, but when I try adding dc.Flush(); after drawing the mouse-pointer bitmap, it doesn’t seem to have any effect on the problem.

Using several render textures at the same time during runtime

I want a system where I can use render textures as portraits/icons. So when I select something and want to show it in my UI, currently I:

  1. Spawn my prefab with the model (I have one of these prefabs for each model); in my case this prefab has a camera (for the render texture), a model, and a spotlight.
  2. Show this render texture in my UI

This is fairly straightforward when I only need one render texture at a time. But say I select a house and want to see all its residents; now I need 6-10 render textures, all showing different models.

Should I create/destroy render texture assets at runtime for this type of feature, since each camera would need its own render texture? I’m worried it’s an expensive feature that costs more than it’s worth.

Or do I need to create one render texture asset for each model in my game, and simply point to it in the prefab mentioned above?

Is there a smarter way?
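One pattern worth considering, sketched below in Python with purely hypothetical names: keep a small fixed pool of render textures and hand them out to whichever portraits are currently visible, rather than creating one asset per model or creating/destroying textures every selection. `make_texture` stands in for engine-specific creation (e.g. constructing a RenderTexture in Unity).

```python
class RenderTexturePool:
    """Reuse a fixed set of render-texture slots for portraits/icons.

    All names here are illustrative, not a real engine API; the point
    is the acquire/release discipline, not the texture creation itself.
    """

    def __init__(self, capacity, make_texture):
        # Pre-create 'capacity' textures once, up front.
        self._free = [make_texture(i) for i in range(capacity)]
        self._in_use = {}

    def acquire(self, owner):
        # Hand an unused texture to 'owner' (e.g. a portrait widget).
        if not self._free:
            raise RuntimeError("pool exhausted; raise capacity or release a slot")
        tex = self._free.pop()
        self._in_use[owner] = tex
        return tex

    def release(self, owner):
        # Return the owner's texture so another portrait can reuse it.
        tex = self._in_use.pop(owner)
        self._free.append(tex)
```

Sized to the maximum number of portraits visible at once (say 10), this caps the texture memory cost regardless of how many models the game has.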

Bind render result to texture id

I want to save the rendered result of the screen and then apply another shader to that result. The typical way is to read the screen using glReadPixels and then upload that image back to the GPU before applying my effect. So is there a way to bind the screen result directly to a texture ID on the GPU, instead of reading it back to the CPU and then uploading it to the GPU?

How to generate a star onto a render texture with spherical warping

How would one procedurally generate a star in a compute shader that looks like one of these two, at any size needed? Any way to transfer this onto a spherical map would also be appreciated. The goal is to create a spherical skybox of stars (the stars are pre-generated, not just decoration).
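For the spherical-map part, a common approach is equirectangular mapping: convert each star’s unit direction vector into UV coordinates on the sky texture. A Python sketch of one convention (the axis convention here, with +Y as up, is an assumption):

```python
import math

def dir_to_equirect_uv(x, y, z):
    """Map a unit direction vector to equirectangular UV in [0, 1].

    u comes from the azimuth around the Y axis, v from the polar
    angle; +Y maps to the top of the texture (v = 0).
    """
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)
    v = 0.5 - math.asin(y) / math.pi
    return u, v
```

The same two lines translate directly to HLSL (`atan2`, `asin`) inside a compute shader when writing stars into a render texture.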

So far I have accurate positioning of the stars on the spherical skybox, but I lack the equations to get it looking the way I want.

Preferably the one on the right.

[two reference images: example star renders]

Below is what I currently have: roughly 5 to 15 ms processing time, for a little over 30k stars.

[screenshot of current result]

Using Unity 2019.3.1f1; it needs to be compute shader compatible (if not, I will convert it somehow). Output is to a render texture.
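As a starting point for the missing equations, one common recipe for a star sprite is a bright core with an inverse-power falloff plus faint cross-shaped diffraction spikes along the axes. A Python sketch of the per-pixel intensity (all constants and parameter names here are assumptions to tune, intended to be ported to a compute shader):

```python
import numpy as np

def star_sprite(size, falloff=2.2, spike_strength=0.3, spike_falloff=1.5):
    """Render one star into a size x size float image in [0, 1].

    Intensity = core term 1/(1 + (k*r)^falloff) plus cross spikes
    that are strong near the x/y axes and fade with distance.
    """
    # Pixel coordinates centred on the sprite, normalised to [-1, 1].
    ys, xs = np.mgrid[0:size, 0:size]
    x = (xs - (size - 1) / 2) / (size / 2)
    y = (ys - (size - 1) / 2) / (size / 2)
    r = np.sqrt(x * x + y * y)

    # Bright core with a soft inverse-power falloff (no hard edge).
    core = 1.0 / (1.0 + (r * 8.0) ** falloff)

    # Cross spikes: exponential falloff away from each axis,
    # further attenuated with distance from the centre.
    spikes = spike_strength * (
        np.exp(-np.abs(x) * 40.0) + np.exp(-np.abs(y) * 40.0)
    ) * np.exp(-r * spike_falloff)

    return np.clip(core + spikes, 0.0, 1.0)

img = star_sprite(64)
```

Because everything is a closed-form function of the pixel position relative to the star centre, it translates per-thread to HLSL in a compute shader, and the size parameter lets the same formula scale to any sprite size.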