[WIP] Concept for my Unreal Engine 1 Rendering Ambitions.


Post by han »

Goals
n/a


Key Issues
n/a


OpenGL Requirements
n/a

Folklore
I've been working on rendering for about two years. It started after I built headers for Nerf Arena Blast, at that point assuming that someone would pick up the headers and continue developing/building render devices for that game. Basically I used the ut432pubsrc D3DDrv and the ut436-opengldrv-src-090602 for getting the headers right for the rendering part.

Afterwards I also built utglr/utd3d and d3d10drv. utglr/utd3d clipped off parts of the weapon model; d3d10drv, after being a pain in the ass to build, worked fine despite the known Nvidia issues. The D3DDrv and the ut436-opengldrv-src-090602 on the other hand performed fairly well. D3DDrv's rendering was correct, though it didn't support really high resolution textures, and increasing the define for the maximum supported texture resolution runs into some rather low limits. It also didn't support DXT1. The ut436-opengldrv-src-090602 on the other hand supported DXT1, performed nicely and just had minor masking issues with precaching enabled for some textures. I stuck to these two sources first, but dropped D3DDrv at some point later on.

While these were working fine ingame, the ortho editor views were unusable, so Rajada kept on nagging me to make these work in the OpenGLDrv. After some (to be precise: an awful lot of) trial and error I got them into a usable state, though more than a dozen other Editor related issues remained. Somehow I kept on addressing these issues and in the meanwhile worked on cleaning up the source. The ut436-opengldrv-src-090602 is all in all a solid base to start with and far cleaner than the ut432pubsrc's opengldrv, with much of the legacy 16 bit rendering and other cruft already removed, but it still had support for extensions which were virtually unavailable even at the time it was written, and some extensions were mixed up (checked for one, but actually using another one, etc.).

However the biggest issue (which one always faces with the stock URenderDevice interface) was that the code was missing a lot of state change/coordination, which was especially pronounced inside the UnrealEd for Deus Ex. Up to that point it was merely a bit of effort put into the OpenGLDrv source and merely a minor side project which I never intended to push really far.

At the end of that year I made my first contact with the Render (as in URenderBase/URender) side of the rendering pipeline -- that's the part where the high level rendering logic happens (although it contains a ridiculous amount of low level logic) -- compared to the RenderDevices (as in URenderDevice, UD3DRenderDevice, USoftwareRenderDevice, UOpenGLRenderDevice, ...) which basically just put out some primitives.

There was this Revision map which blacked out and crashed, spamming a weird "Anomalous singularity in URender::DrawWorld" error. Björn asked me whether I could look into this issue, so I started to build a URender subclass called URenderExt using the headers for the Render package found in the Deus Ex SDK (a precious finding). It was a pain in the ass to figure out, because it crashed in unrelated places, and iirc this was still about 1-2 months B.S. (before chatting with Smirftsch for the first time). Despite fixing this major issue (which was also the #2 crash issue for UnrealEd 1) I otherwise just made minor changes there.

I spent a few months afterwards mostly on miscellaneous (non rendering) stuff, with a bit of further work on my OpenGLDrv, but since then my focus went almost exclusively to rendering or editing related work ("a commandlet a day keeps the sorrows away..."). Writing the commandlet/code for exporting UMesh/ULodMesh to 3DS with all the nasty caveats also gave valuable insight into mesh related storage and issues.

At the same time my prior, mostly editor support focused work on my OpenGLDrv turned into working towards a clean/flexible/extensible source to be used later, when I would expand the scope to also include the Render implementation.

My original plan was actually to first get the OpenGLDrv into some fine releasable state, but the gamma related issues (and partially solving these) would have required me to either make a large step forward or to skip a major OpenGLDrv-only release until lighting in Render and the resource workflow/tools are implemented. I had been working for months towards tackling the Render scope, and while a RenDev-only release would have been nice, it was by no means a milestone or goal, so I plain ditched that plan.

So currently I'm mostly occupied with moving low level code out of Render into RenDev/Shader scope (mesh clipping on the scene node plane, environment mapping, ScaleGlow, AmbientGlow, etc.), and I'm on the brink of using the internal interfaces I use for abstraction to deal with the fact that a surface, gouraud, tile, etc. draw call can happen at any time straight out of Render. This would in turn eliminate an awful lot of change/state tracking code in the RenDev which was required before to make it reasonably fast, but that code is to some degree also wasting performance and, even worse, makes things way more complicated than they need to be.

A couple of days ago I started to do further research (mostly reading papers about shadow mapping, OIT, etc.) to gain some more detailed knowledge for planning the overall design of the rendering pipeline. So I got a slight desire to start working on a written concept, especially for those (mostly local) parts which are already rather fixed. Writing a concept down is also always a great way to revise, straighten and clean up a concept. The overall document (these posts) will probably remain incomplete for quite a while but should get filled in over time.

Rendering Pipeline Overview

Post by han »

tbd.

Certain Aspects of Rendering

Post by han »

Surface Rendering
One of the problems of surface rendering is that each surface just has a single normal, which in turn results in hard lighting changes between surfaces and is less appealing. One idea would be to tessellate off a small band at the side of each surface while staying on the surface plane. Each vertex inside the surface gets the original surface normal, while each vertex normal on the sides of the surface gets averaged with the adjacent surface's normal. The normals get interpolated for lighting. While this will maintain the overall flat appearance of the surface, it will smooth out the sudden lighting change between surfaces.



Mesh Rendering



Sprite Rendering
Sprites should not get rendered through DrawTile anymore. Instead a dedicated Sprite rendering interface is desirable for several reasons.

Sprites should, as far as possible, be affected by ScaleGlow, AmbientGlow, the Zone's ambient light, unlit, selection, etc. in the same way as a triangle on a mesh, as they both represent an Actor.

Lighting should be performed on Sprites unless they are unlit. Volumetric fog should always be applied to Sprites. However, lighting on a sprite is by itself a bit more complicated.

A naive approach of using a single normal facing the player would probably not yield convincing results.

One idea would be to regard the sprite as always facing the light source (i.e. the dot product in the Phong diffuse term is always 1). One advantage of this is that diffuse lighting would not depend on the view position; one downside is that it can produce too high lighting values which might appear out of place. One way out could be some a priori scaling value. A plausible value could be sin(pi/4) ~ 0.7, assuming the sprite is always at some 45° angle to the light source, but this will likely end up in trying out which value works best.

An interesting approach would be to take the normals for lighting the sprite from a hemisphere or paraboloid facing the player. So the normal on the left side of the sprite would face to the left, in the middle towards the player, and so forth. Light hitting the sprite from the left would then light the left side. I expect this approach to deliver nice results in a lot of cases, but I also expect it to fail badly for certain other cases, though one could use PF_Flat to indicate that another normal generation approach, such as one of the above, should be used for those pathological cases.
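
A minimal CPU side sketch of the hemisphere idea (the real thing would live in a shader); all names are hypothetical and an orthonormal camera basis is assumed to be available:

Code: Select all

#include <math.h>

struct Vec3 { float X, Y, Z; };

static Vec3 Normalize( const Vec3& V )
{
	float L = sqrtf( V.X*V.X + V.Y*V.Y + V.Z*V.Z );
	return Vec3{ V.X/L, V.Y/L, V.Z/L };
}

// U,V in [0,1] are the sprite local coordinates, (0.5,0.5) being the center.
Vec3 SpriteHemisphereNormal( float U, float V, const Vec3& CameraRight,
                             const Vec3& CameraUp, const Vec3& TowardsCamera )
{
	// Map UV to [-1,1] and clamp onto the unit disc.
	float X = 2.f*U - 1.f, Y = 2.f*V - 1.f;
	float R2 = X*X + Y*Y;
	if ( R2>1.f ) { float R = sqrtf(R2); X /= R; Y /= R; R2 = 1.f; }

	// Lift onto the hemisphere; Z points back towards the viewer.
	float Z = sqrtf( 1.f - R2 );
	return Normalize( Vec3{ X*CameraRight.X + Y*CameraUp.X + Z*TowardsCamera.X,
	                        X*CameraRight.Y + Y*CameraUp.Y + Z*TowardsCamera.Y,
	                        X*CameraRight.Z + Y*CameraUp.Z + Z*TowardsCamera.Z } );
}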

For shadow casting a sprite should face the light source. This makes the shadow view independent and also prevents shadows from degenerating into lines.

Another reasonable change is to honor Texture.DrawScale for Sprite rendering. This allows easier replacement of Sprite textures, as a higher resolution replacement can simply set a lower DrawScale to cover the same area as the old texture, eliminating any need for code or defaultproperty adjustments to account for a changed texture resolution.

Another issue to discuss is Sprite rejection. Currently it is done by calculating the screen coordinates of the sprite and rejecting based on those. That's a lot of math and a lot of branching, which makes it just awkwardly slow. Also, for the animated sprite draw types this involves a lot of extra slow handling to figure out which texture to actually draw just to get its size, even if it wouldn't be drawn at all.

For the second part it is reasonable to require that the frame sizes of an animated texture must match for rejection to work correctly. This requirement, which should virtually always be fulfilled, allows further optimisations such as storing all animation frames in an array texture. This would also allow a clean and fast handling of bParticles and bRandomFrame meshes.
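
A hedged sketch of packing the frames into a GL_TEXTURE_2D_ARRAY, assuming all frames share size and format and that a loader provides the GL 4.2 entry points; FrameData is hypothetical:

Code: Select all

// One layer per animation frame; mipmap count reduced to 1 for brevity.
GLuint CreateSpriteFrameArray( GLsizei Width, GLsizei Height, GLsizei NumFrames,
                               const void* const* FrameData )
{
	GLuint Tex = 0;
	glGenTextures( 1, &Tex );
	glBindTexture( GL_TEXTURE_2D_ARRAY, Tex );
	glTexStorage3D( GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, Width, Height, NumFrames );
	for ( GLsizei Frame=0; Frame<NumFrames; Frame++ )
		glTexSubImage3D( GL_TEXTURE_2D_ARRAY, 0, 0, 0, Frame, Width, Height, 1,
		                 GL_RGBA, GL_UNSIGNED_BYTE, FrameData[Frame] );
	glTexParameteri( GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
	glTexParameteri( GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
	return Tex;
}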

The first part breaks down to a fast calculation of a bounding volume to do rejection (if desired) in world space, which is nothing more than calculating the radius of a circle that encloses a rectangle.

So the naive approach would be to just calculate R = sqrt((width/2)^2+(height/2)^2), but that would involve a slow sqrt operation.

For square textures this simplifies to: R = width/sqrt(2) = (width+height)/2/sqrt(2). That's nice: that's just an addition and a multiplication with a constant. For non-square textures the formula underestimates the radius, for 1:2 textures just by about 6%. For 1:8 textures a correction factor of 1.27 yields no false negatives, and a 1:8 ratio is probably the worst it can get for sprites anyway. This factor should be fine, as one shouldn't get that many false positives due to it. The next best option would be some Taylor approximation, but that would introduce a quadratic term, which wouldn't be worth it.

So the final result for calculating the bounding sphere of a sprite is: R = (width+height)*1.27/2/sqrt(2) ~ (width+height)*0.45.

The rejection is then just about calculating five plane dots, which is less than what it currently costs just to project the sprite into view space.
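
A minimal sketch of that rejection test; Vec3/Plane are hypothetical stand-ins for the engine's vector/plane types, with the frustum plane normals assumed to point inward:

Code: Select all

struct Vec3  { float X, Y, Z; };
struct Plane { Vec3 Normal; float Distance; };

static float PlaneDot( const Plane& P, const Vec3& V )
{
	return P.Normal.X*V.X + P.Normal.Y*V.Y + P.Normal.Z*V.Z - P.Distance;
}

// Width/Height are the texture dimensions, DrawScale the sprite's scaling.
bool RejectSprite( const Vec3& Location, float Width, float Height,
                   float DrawScale, const Plane Frustum[5] )
{
	// (Width+Height)*1.27/(2*sqrt(2)) ~= (Width+Height)*0.45.
	float Radius = (Width + Height)*0.45f*DrawScale;
	for ( int i=0; i<5; i++ )
		if ( PlaneDot( Frustum[i], Location ) < -Radius )
			return true; // Entirely outside one frustum plane.
	return false;
}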



Decal Rendering
There are basically two kinds of approaches: combining the Decal material with the material of the primitive currently being rendered, thus making it part of the primitive's surface itself, or a post-drawing approach, like either drawing slightly above the surface or using some projector approach.

The latter approach is somewhat more flexible when it comes to what kind of thing is currently drawn, but it is impractical to handle on transparent objects. Especially if order-independent transparency approaches like depth peeling are used, they might easily multiply the time spent on it and face other issues from not being part of the object itself.

The first approach would only work for BSP surfaces, but it saves a large amount of CPU-heavy code for clipping the decal to the BSP surface, frequent shader switches, and so on. Another visual advantage is that, because it is baked into the rendered surface itself, it naturally integrates into lighting and, even more importantly, into volumetric fog. It also doesn't cause any additional problems for transparent surfaces.

As discussed under modulated below, Decal textures should be alpha textures instead of modulated ones. Having an alpha value for blending also allows the decal to blend in further properties. A blunt explosion sprite should reduce the specularity of the affected area, a toxic waste splash could also have a glow map so the toxic waste illuminates itself, and so on.

Rendering the Decal as part of the surface has the advantage that you never render the decal outside of the surface, and if you also set a transparent border color and use GL_CLAMP_TO_BORDER for texture wrapping, you don't need to worry about the area outside the decal texture either.
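
A minimal sketch of that wrapping setup (DecalTexture being some already created GL texture name):

Code: Select all

void SetDecalWrapping( GLuint DecalTexture )
{
	const GLfloat TransparentBorder[4] = { 0.f, 0.f, 0.f, 0.f };
	glBindTexture( GL_TEXTURE_2D, DecalTexture );
	glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER );
	glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER );
	glTexParameterfv( GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, TransparentBorder );
}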

The latter approach can also be used for the classic decal rendering path to save a lot of CPU time, as you can just use the BSP surface as the geometry to render, though you need to calculate texture coordinates for each of them. Maybe by using DrawComplexSurface in this case (with the texture coordinate calculation implemented in shaders) you can save a whole lot of time on calculations and especially avoid the expensive surface -> gouraud -> surface program roundtrips! Maybe even simply reuse the structures set up and used to perform the prior surface draw?

Note that the overall memory footprint of all decal textures (even in case they use full materials) is rather low, whereas the count, especially with full materials, quickly goes up. With ARB_bindless_texture one can make all decal textures resident on the GPU, and one can directly access them with some surface key, which you would need anyway to fetch other surface related properties directly out of some buffer. This removes *any* special handling for decals on the CPU side during rendering, which in turn allows batching larger chunks of surfaces at once. This reduced effort is especially crucial for a depth peeling approach for transparencies, as you draw multiple times and you always save the extra CPU side handling of decals. Neat!
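
A hedged sketch of the bindless part; where the 64 bit handle ends up (e.g. a buffer indexed by the surface key) is hypothetical:

Code: Select all

// Make a decal texture resident once and keep its 64 bit handle around.
GLuint64 RegisterDecalTexture( GLuint DecalTexture )
{
	GLuint64 Handle = glGetTextureHandleARB( DecalTexture );
	glMakeTextureHandleResidentARB( Handle );
	return Handle;
}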

Though obsolete, it is worth mentioning that in the classic Unreal Engine 1 forward rendering one can to some degree handle [url=http://coding.hanfling.de/decal_fog.png]fogged decals[/url]. I developed this technique a few months ago. In general it handles fog which is not that colorful gracefully, but once the fog has a more saturated color the dark areas tend towards this color. Nonetheless this is still better than not having fog on them, or having scary decals like in UT. The issue with the saturation can probably be accounted for when switching from modulated decals to alphablended decals, as alphablending can deal with fog gracefully.
However, one issue which cannot be solved this way is that multiple dark decals on top of each other still darken the result, which is clearly visible in the screenshot.

[In case someone asks: The CageLight has bUnlit set, which also disables fog. The panel texture is unlit, but has fog now due to a simple trick: fog for BSP surfaces requires a lightmap index, which is not generated for unlit surfaces. The trick is to temporarily remove the PF_Unlit flag of the surface before the light rebuild in the Editor and restore it afterwards. This can be automated using a UEditorEngine subclass. It also requires that you honor PF_Unlit for surfaces so the lightmap isn't rendered.]



Two sided rendering with focus on Meshes
While two sided is no real concern for surfaces, as you can just flip the surface normal and everything is fine, it gets more complicated for Meshes, where smoothing is performed and the normals don't just depend on the triangle itself, but also on the surrounding triangles.

While the current approach (turn the triangle towards the viewer, calculate face normals and average them into vertex normals) sort of works with vertex-only lighting, it will fail miserably for per-pixel lighting, which depends even more on proper normals.

The best approach for solid/masked two sided would be to require that connected two sided faces face in the same direction. This way one could simply use the inverse of the interpolated vertex normal in case it is a backface. However, one needs to make sure the meshes fulfill this requirement. This also has the advantage that smoothing does not depend on the view position.

For translucents, two sided handling can actually get pretty interesting, as one could model the influence of the light which hits the back, especially if one considers that the rgb data stored in translucents determines the background blending (which is by itself basically indirect lighting). So why not have clouds look like [url=http://coding.hanfling.de/20160313_190418.jpg]that[/url] on their borders?

Another idea for two sided on Meshes would be to use the real face normal as the outside and assume that the mesh has some volume. This could be beneficial for meshes rendered completely as translucent, like the Nali ghosts.

Yet another approach would be to say that for a full mesh with smoothing a normal flipping approach would be too problematic, and to assume that the front and back light contribute in the same amount. This would allow really easy handling.

As translucent also implies diffuse, one could consider not performing specular lighting on translucents at all.

Draw styles, texture and line flags

Post by han »

PF_Masked
Historically PF_Masked has two meanings. For a palettized texture it means: set the first palette color to (0,0,0,0). The other meaning is for rendering: discard the fragment if its alpha value is below 0.5.
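
A minimal sketch of the two meanings, assuming a P8 texture with an RGBA8 palette (names hypothetical):

Code: Select all

// Unpacking a masked palettized texture turns palette entry 0 into transparent
// black; at draw time the shader side then simply does: if( Color.a < 0.5 ) discard;
void UnpackMaskedP8( const unsigned char* Indices, const unsigned int* PaletteRGBA,
                     unsigned int* OutRGBA, int NumTexels )
{
	for ( int i=0; i<NumTexels; i++ )
		OutRGBA[i] = (Indices[i]==0) ? 0u /* (0,0,0,0) */ : PaletteRGBA[Indices[i]];
}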

Apart from that the overall PF_Masked handling is a total clustercrappity smack. If precaching is enabled and a surface has PF_Masked set, the PF_Masked flag is applied to the texture. Canvas drawing always adds PF_Masked. And in Deus Ex, if a texture has PF_Masked set this is also applied to the mesh face.

Currently the best way to deal with masked in the RenDev is to always honor the masked flag when it is set and to treat the first palette index accordingly. This seems to always work right in Unreal, Nerf, Deus Ex, Botpack and probably all other games. However, this way you end up loading the same texture twice, and it gives you that nasty dependency.

As I move in my codebase towards handling the texture with all subtextures (bump, macro, detail) and properties at once this special handling gets more and more nasty.

So my idea is to get all the masked handling straight. PF_Masked at rendering time just denotes whether the fragment should get discarded based on its alpha value, while for textures either just the masked setting on the texture itself is honored when unpacking its data or, even better, those textures get their own TextureFormat id which explicitly says that the first palette entry should be set to (0,0,0,0). There should be just some very rare cases where the same texture data is actually used once as masked and another time as non-masked, and there it would be perfectly fine to have two distinct textures. But then only for a very few cases and not for a large number of these textures.

Another thing one should keep in mind about masked textures is that they disable the early fragment test. While this is currently not a performance concern, it can become one when rendering a lot of shadow maps. Using another shader set for non-masked textures is also an issue, as switching between shaders is expensive too. But then again, if one can't use early fragment tests one could make the best of it and use a logarithmic z-buffer, which would disable early fragment tests anyway.

One particular issue with masked textures is the usually rather odd outline, and for many textures the darkening towards black on the edges. The darkening issue can be solved by dividing the fragment's color value by its alpha value in the shader, but that's not the best way to go. A better way is to use a texture format with an independent alpha channel, so the color will not be influenced by the masked part of the texture being set to invisible black.

And if one has a non binary alpha channel one can start to use an antialiased alpha channel to smooth out the shape of a texture. One test I ran sometime last year can be seen [url=http://coding.hanfling.de/aa_testscreen.png]here[/url].

It's also worth mentioning that you can get a perfect 45° angle when choosing a ((0,0.5),(0.5,1)) [url=http://www.wolframalpha.com/input/?i=plot+(1-x)*(1-y)*0.5+%2B+(1-x)*y*1+%2B+x*(1-y)*0+%2B+xy*0.5+from+(0,0)+to+(1,1)]pattern[/url].

I'm pretty sure using these techniques one can do a lot about the awful borders of the mountains used in the skyboxes in Unreal.

Another problem of the binary alpha channel is its heavy influence on the lower mipmaps. Currently some biasing towards opaque is used, as can be seen [url=http://coding.hanfling.de/CoreTexMetal_Chainlink_01.png]here[/url], [url=http://coding.hanfling.de/CoreTexMetal_Metal_ClenChainlink_A.png]here[/url] and [url=http://coding.hanfling.de/DeusExDeco_Skins_Plant2Tex2.png]here[/url].

Even worse, these mipmaps increase the already existing tendency of masked textures towards moiré and aliasing, so a non-binary alpha channel can help here too. To further reduce the moiré/aliasing issue one should probably not use a box filter for the alpha channel but a filter with some more reasonable spectral properties. Two other things to experiment with are whether one gets better results by factoring in a slight bias towards opacity, or by using a pseudo random pattern for selecting the samples for the alpha channel.



PF_NoSmooth
NoSmooth chooses a nearest texture minification/magnification filter over a linear reconstruction filter. Same as for PF_Masked, during precache and if set on a surface this gets applied to the texture the surface uses. While it would be less of an issue to handle on a per draw call basis, it is plain not worth it: my stance is that for PF_NoSmooth always the setting on the texture itself should be used. The flag is used so rarely that the extra complexity to handle it per draw call isn't justified. As for masked, if you really need both behaviours, copy the texture. This is still faster than having to care about switching the filtering state.

In Deus Ex, only the background snapshot and the drug effect seem to always be drawn smoothed. All other UI elements appear to be drawn with PF_NoSmooth, though it isn't in general set on the textures themselves. So PF_NoSmooth could just be set on the UI textures which would otherwise be drawn smoothed.



PF_Unlit / bUnlit
Unlit surfaces yield some of the most unpleasant visuals in these games. They usually either stick out, or are done on a "keep most of the texture black" basis which contrasts with the other textures being used. Glowmaps are an easy way to achieve the same results while at the same time integrating into the lighting system, so the surfaces won't stick out anymore. They can also fairly easily be created while preserving the visual style of the original content.



PF_Highlighted / Additional STY_Highlighted
PF_Highlighted is premultiplied alpha blending. Using premultiplied alpha blending has some inherent advantages over non-premultiplied alpha blending, as pointed out here.

It is also worth pointing out that this blending mode is very versatile, as you can effectively put out an alpha value for the background and the color components to add. For non-premultiplied alpha blending, Src is multiplied with alpha during blending. While you can easily multiply with alpha inside the shader without any issues, you can't divide by alpha inside the shader without special precautions, as you easily end up dividing by zero and get black faces.

It is also worth mentioning that this blend mode offers the ability to reasonably handle fog as part of single pass rendering per primitive.
Due to the ability to handle fog and its versatility, I'll translate all transparency rendering to this blend mode, except modulated for canvas drawing. Details are illustrated below.
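
A minimal sketch of the blend state for premultiplied alpha; the shader is assumed to output (Color*Alpha, Alpha):

Code: Select all

// Result = Src + (1-SrcAlpha)*Dst, with Src already containing Color*Alpha.
glEnable( GL_BLEND );
glBlendFunc( GL_ONE, GL_ONE_MINUS_SRC_ALPHA );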



Additional PF_AlphaTexture / STY_AlphaTexture
Just some ordinary alpha blended format.



PF_Translucent / STY_Translucent
Historically, Translucent together with Modulated have been the only ways to supply transparency in Unreal Engine 1. The reason these were chosen is their ability to work on the color components only and not require an additional alpha channel, which wouldn't have been possible to use with decent quality in a palettized texture format or would otherwise have taken up too much space.

[url=http://www.kentie.net/article/multipass/index.htm][KEN][/url] gives an overview of the two classic ways of rendering translucents at the bottom of the page, so I won't recap them here. The fishy thing about both approaches is that the light which hits the surface influences the transparency of the surface itself and thus results in a different visibility of the background. Especially note that if no light hits a translucent surface, it completely removes the background. The best examples for this are some Unreal maps where this hid or darkened the stars.

I see them both as approximations to the following equation, which Epic simply chose due to the limitations of the hardware of that time.

Code: Select all

Result = Diffuse.rgb*Light.rgb + (1-Diffuse.rgb)*Dest.rgb
So Light just interacts with the surface and not with the background anymore.

However, implementing this equation would come at a very high cost. Either you implement it as multipass and need to render each mesh triangle on its own twice, or you use dual source blending and can't write to multiple color buffers. So either way you are crappity smacked.

So back to the equation. The only difference to premultiplied alpha blending is that we have Diffuse.rgb instead of some alpha channel for the background blending, so why not derive one?

Deriving the alpha channel basically breaks down to using some norm. One of the most natural choices would be the relative luminance or the maximum norm. The first one works well in [url=http://coding.hanfling.de/Slith_GammaCorrect_FastTranslucentApprox.jpg]Unreal[/url] while the second one works great for the effects in Nerf.
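
A minimal sketch of that derivation with the two norms mentioned above (Rec. 709 weights assumed for the relative luminance); all values are assumed to be in [0,1]:

Code: Select all

#include <math.h>

static float RelativeLuminance( const float C[3] )
{
	return 0.2126f*C[0] + 0.7152f*C[1] + 0.0722f*C[2]; // Rec. 709 weights.
}
static float MaxNorm( const float C[3] )
{
	return fmaxf( C[0], fmaxf( C[1], C[2] ) );
}

// Approximate Result = Diffuse*Light + (1-Diffuse)*Dest as premultiplied alpha
// blending: output (Diffuse*Light, Alpha) with Alpha derived from Diffuse.
void TranslucentToPremultiplied( const float Diffuse[3], const float Light[3],
                                 float OutColor[3], float& OutAlpha )
{
	OutAlpha = RelativeLuminance( Diffuse ); // Or MaxNorm( Diffuse ) for Nerf style effects.
	for ( int i=0; i<3; i++ )
		OutColor[i] = Diffuse[i]*Light[i];   // Color part is already 'premultiplied'.
}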

Translucent mirrored surfaces are a bit more of a special case. The biggest difference is that one deals with reflection and not transmission when it comes to blending in the background. Using 0.7 - 0.2*RelativeLuminance(Diffuse) yielded [url=http://coding.hanfling.de/Terraniux_0_70_minus_0_20_RelativeLuminance.jpg]good results[/url] for me. One could consider introducing an angular dependency using some Fresnel approximation.

Another thing to keep in mind is that it makes a huge difference whether you derive the alpha channel from the linear data or the gamma compressed one. In fact, for a lot of textures it even works quite well to treat them completely as linear data.

My impression is that it will probably work fairly well to always use the same norm to derive the alpha channel, and to just have the ability to take the raw texture data as either sRGB or linear depending on the texture format.

For the best results, especially on meshes where it doesn't depend on surface flags in a map, one should head for selecting the norm to derive the alpha channel as an offline step and just convert that texture into some premultiplied alpha format. For those used in a map one can simply derive the alpha channel offline and plain ignore it for translucent surfaces, while using it on premultiplied alpha surfaces if one intends to change a map to make full use of it. I can see this happening as part of my build commandlet, so it's just a line in the *.upkg file to define which norm to use to derive an alpha channel.

For Meshes and Sprites one now also needs to factor ScaleGlow into the alpha term, as ScaleGlow was applied to the light term before. Otherwise ScaleGlow=0 would not result in zero visibility; instead it would darken the background.

In the end, by using this approach the overbrightening effect is gone while still being single pass, and I get an ordinary premultiplied alpha blending where I can deal with fog far more easily and can employ order independent transparency approaches. Neat!



Additional STY_Add
While my change to remove the unaesthetic overbrightening effect on translucents is a noticeable improvement in general, it also exposed certain cases where the overbrightening effect was the most important part of the visual appearance. Thus a dedicated rendering style exposing this feature was needed.
STY_Add is as simple as just adding the color to the background. It can be easily implemented by using PF_Highlighted with a zero alpha value.

One noticeable example are coronas. Being translucent, the change of basing the background alpha value solely on the diffuse map introduced noticeable darkening when coronas faded out. Even with just a quick, not yet fine tuned change to account for switching to gamma correct rendering when deriving the corona color, and using STY_Add, I get good results for corona rendering. In fact, just adding the color reflects the nature of a corona far better than the classic translucent approaches.

Other examples are the dispersion pistol's effect and energy weapon projectiles in general.



PF_Modulated / STY_Modulated
Modulation is basically Src*Dst*2. The best way to think about this is to 'light' what's behind the modulated primitive with the light values taken out of the modulated texture. So modulated primitives are no standalone objects like an alpha blended primitive would be, but merely require interaction with the (viewport dependent) background.
Note that a Src value of 0.5 yields Dst again (i.e. no effect), which is also the reason why the outside of blood decal textures, etc. fades to 50% grey, something most of you guys are aware of. Also note that modulated textures are usually stored as linear data.
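
A minimal sketch of the blend state for the classic 2x modulation:

Code: Select all

// Result = Src*Dst + Dst*Src = 2*Src*Dst.
glEnable( GL_BLEND );
glBlendFunc( GL_DST_COLOR, GL_SRC_COLOR );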

Modulated textures have one advantage when used for detail/macro textures: since the data actually gets multiplied, black areas of the source texture actually stay black. When used for detail textures one should always render the detail texture regardless of the distance, as especially in Deus Ex the distance cutoff resulted in some rather drastic brightness changes when coming close to a surface. However, detail textures should always have mipmaps and, even more importantly, the lowest mipmap level should be 50% grey. In fact Deus Ex looks way better after one normalizes its detail textures this way.

For all other cases modulated textures plain suck. You can't put them into any draw order independent handling, you can't handle fog gracefully with them, and in almost all cases they won't be needed anymore.

Decals can always be done using some alpha blended format. While the average black bullet hole or explosion leftovers won't profit much, other textures like blood spurts or bio rifle splats will profit, as their texture data can properly be blended in instead of being heavily affected by the background texture. Probably a good share of them could be converted using automated means, like clamping the texture data to [0.5,1], mapping this to [0,1] and deriving some alpha value like it's done for translucents.

Especially on the Meshes in Deus Ex, modulated is used to achieve a light-in-the-dark effect on Robots, AlarmPanels, etc. This should by all means be replaced with glowmaps, as they offer the same functionality.

While this removes nearly all modulated, there are still some rare cases left, so there should be at least a somewhat working fallback approach. A rough idea I have, which I call "the diner sign approach" for myself, would be to derive a diffuse material, a lighting term and an alpha term, and treat it as an ordinary alpha blended texture. For the diffuse term my idea is to clamp the texture data to [0.5,1] and stretch it to [0,1]. For the lighting term one should probably consider that only values above 0.5 added light, so using that range again probably makes most sense. For the alpha term, one should probably apply some norm like for translucents, but on (2*|red-0.5|,2*|green-0.5|,2*|blue-0.5|). Note that the darkening effect is caused by reducing the background blending.
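
A minimal sketch of that fallback conversion; the maximum norm is just one possible choice for the alpha term:

Code: Select all

#include <algorithm>
#include <cmath>

// Turn a modulated texel (neutral grey = 0.5) into diffuse, light and alpha terms.
void ModulatedToAlpha( const float Src[3], float OutDiffuse[3],
                       float OutLight[3], float& OutAlpha )
{
	float Dev[3];
	for ( int i=0; i<3; i++ )
	{
		OutDiffuse[i] = 2.f*( std::max( Src[i], 0.5f ) - 0.5f ); // Clamp to [0.5,1], stretch to [0,1].
		OutLight[i]   = OutDiffuse[i];                           // Only values above 0.5 added light.
		Dev[i]        = 2.f*std::fabs( Src[i] - 0.5f );          // Deviation from neutral grey.
	}
	OutAlpha = std::max( Dev[0], std::max( Dev[1], Dev[2] ) );   // Some norm, here the maximum norm.
}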



PF_TwoSided
This flag indicates that the primitive is not backface culled. Usually render devices in Unreal Engine 1 perform no backface culling on their own; instead culling is done beforehand in software inside the scope of Render. The probably best approach is to ditch all CPU side backface culling and triangle flipping, and instead enable backface culling in OpenGL and reverse the vertex order if needed in a geometry shader.
This flag also has implications for vertex normals on meshes; see the requirements for Meshes below.
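
A minimal sketch of the GL side; the geometry shader part for reversing the vertex order is omitted here:

Code: Select all

// Let the GPU cull backfaces; two sided primitives simply disable culling.
void SetCullMode( bool bTwoSided )
{
	if ( bTwoSided )
		glDisable( GL_CULL_FACE );
	else
	{
		glEnable( GL_CULL_FACE );
		glCullFace( GL_BACK );
		glFrontFace( GL_CCW ); // Whichever winding the pipeline actually emits.
	}
}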



LINE_DepthCued
The following implementation of the LINE_DepthCued flag has proven reasonable in the past. Movers and the active brush are drawn on top in the 3D views, and the path display gets occluded by BSP geometry and gains a reasonable appearance instead of being a wild entanglement drawn on top.

Code: Select all

glDepthFunc( (LineFlags & LINE_DepthCued) ? GL_LEQUAL : GL_ALWAYS  );
glDepthMask( (LineFlags & LINE_DepthCued) ? GL_TRUE : GL_FALSE );
Note that some equivalent functionality in the PF_ set would be appreciated, like a PF_NonDepthCued flag, so the LINE_DepthCued flag could be mapped to PolyFlags, avoiding conflicts with PF_Occlude or a potentially missing (or needless) reset of the depth function. Space inside the PF_ set is sparse, but this would be a hot candidate for it. It would also have some advantage for the 2D UI drawing phase: you wouldn't need to clear the z-buffer just to draw a Tile over something.



Additional LINE_NoPointConvulse
I added this line flag to disable the expensive check in ortho views which draws a larger point instead of a line when the line (within some limit) would appear as a one pixel point in that view. For rendering meshes as wireframes one does not want them to produce points, so this is just wasted time. Even with using this flag for mesh rendering and additionally cutting the cost of that check in half, it still takes up to ~2ms in total for the ortho top view of the Sunspire just to perform these checks (though my CPU is horribly slow), which is still an awful lot. One can probably get rid of a lot of those checks by overriding the DrawWireBackground() function in the UEditorEngine and also supplying LINE_NoPointConvulse there. In addition one can pass all wire background lines of the same color and with the same LineFlags in one batch.

Requirements for Resources

Post by han »

In general, the related issues and the motivations behind these requirements are discussed above.

Textures
Textures should be in sRGB or linearized sRGB colorspace and contain proper mip maps. Masked textures should have an antialiased alpha channel for improved borders. PF_NoSmooth needs to be set properly per texture.



Meshes
For Meshes it breaks down to being in a form such that per vertex normals can be calculated properly. This requires that two sided faces which are connected, and thus get smoothed, have the same orientation. Another thing to consider for non-organic objects such as crates is that introducing bands near the corners can provide an overall flat look on the surface while still having smooth edges. For a discussion about this, see [url=http://www.oldunreal.com/cgi-bin/yabb2/YaBB.pl?num=1463427961/27#27]here[/url].



Maps
In general there are no changes to the maps required. However, there are occasionally things which are likely to break (e.g. the modulated ripple effect of the NyLeve waterfall) and things which will end up looking quite different.

This is especially true for skies which relied on missing light on a translucent surface making the underlying surface black (and thus occluding stars); on the other hand this stops stars from bleeding through clouds.
Regarding the increased visibility of the stars in the background star textures, it is also worth mentioning: back in the day a lower screen resolution caused a lower resolution mip map to be selected for rendering. As this was filtered down, it offered a lower contrast compared to a high screen resolution, where almost always the highest resolution mip map will be used. CRTs also applied some slight spatial blur to the output, further reducing the contrast. The lower contrast in turn reduced the visibility of the stars. However, this can be addressed by either blurring the star textures, reducing their brightness, or even removing some stars.



Defaultproperties
Maintaining defaultproperties should mostly be about replacing some STY_Translucent with STY_Add, and setting bUnlit for Sprite actors where unlit behaviour is desired, while removing it for a couple of cases where it is not, like the flare, gibs, etc.



Code
Code changes should be minimal, apart from the required integration of an interface to query the portals to be drawn on top and another interface which is able to properly queue the first person weapon and item overlay. Currently I can mostly think of code which sets bUnlit and ridiculously high ScaleGlow values, which should probably be replaced with toggling between a texture with and without a glowmap in the case of light switches or lamps.

Miscellaneous

Post by han »

Ignoring Coronas for Editor selection
Render based approach: Check for Viewport->HitTesting and don't draw Coronas.
RenDev based hack: start with INT HitDepth=0; do HitDepth++ in PushHit() and HitDepth-- in PopHit(). In DrawTile, check for Viewport->HitTesting && HitDepth==1 && Viewport->Actor->RendMap.
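
A minimal sketch of that hack (signatures abbreviated, the surrounding render device code is assumed):

Code: Select all

INT HitDepth = 0;                      // Member of the render device.

void PushHit( /*...*/ ) { HitDepth++; /*...*/ }
void PopHit ( /*...*/ ) { HitDepth--; /*...*/ }

void DrawTile( FSceneNode* Frame /*, ...*/ )
{
	// Coronas show up at hit depth 1 in the top level map view; skip them
	// while the Editor is hit testing so they can't be selected.
	if ( Frame->Viewport->HitTesting && HitDepth==1 && Frame->Viewport->Actor->RendMap )
		return;
	/*...*/
}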

Unreal 227 Specific Issues and Notes

Post by han »

REN_LightingOnly
Pretty cool feature, totally worth supporting. Will probably become an invaluable development aid for me.



ActorRenderColor / ActorGUnlitColor
As I need to support PF_Unlit and AmbientGlow, etc. they can just reuse the infrastructure and won't take up additional G-Buffer space.

However, they both act as individual (down)scaling factors for each color channel, the first one for the lit and the second one for the unlit parts of the mesh. Smirftsch gave me the usage example of a mapper being able to just modify the color of a flag.

Imho a downscaling approach yields no visually appealing results. An alternative way of handling these would be to just add these colors as light. This would result in self illumination in that particular color, pretty much like a colored ScaleGlow feature. It roughly matches the intention behind the feature, while at the same time resulting in visually more appealing rendering, and it even adds a pretty cool feature: how about adding a slightly devilish reddish glow to some actors? As the default values for these are (0,0,0,0), this change will cause no issues.
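
A minimal sketch of the proposed alternative (plain arrays instead of engine types):

Code: Select all

// Treat ActorRenderColor as added light (a colored ScaleGlow) instead of a
// per channel downscaling factor.
void ApplyActorRenderColor( const float Diffuse[3], const float Light[3],
                            const float ActorRenderColor[3], float Out[3] )
{
	for ( int i=0; i<3; i++ )
		Out[i] = Diffuse[i]*( Light[i] + ActorRenderColor[i] ); // Self illumination term.
}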



Static Meshes
Essentially StaticMeshes share the same caveats as Meshes and LodMeshes and thus require the same excessive per face/vertex processing, though this only needs to be done once and can be cached afterwards. I should work with Smirftsch on a solution which puts the StaticMeshes into a more desirable format during loading.



Projector decals
Supporting them on solid geometry shouldn't be that much of an issue, unless they need to be drawn right after an object, in which case this might noticeably reduce performance. Supporting them on transparent geometry is a large issue. They might not work with a depth peeling approach, and if they do, they are likely to cause a severe performance hit. So reducing the number of peeling layers for 227 might be an option to counter this effect somewhat. If someone knows a way to draw them as part of a transparent primitive, tell me.
At least I won't need them for shadows in the end anymore...



Distance Fog
Unreal 227 offers linear, exp and exp2 distance fog, while at least Rune and Brother Bear use linear distance fog.

Switching to linear rendering heavily affects the appearance of distance fog. In the case of linear distance fog, bright fog on a dark level would hit harder, while at the same time dark fog on a bright level would hit less hard. For exp and exp2 fog the non-linearities are even more complicated.

I tend to take them by their name and implement them as what they claim to be, partially because I expect exp and exp2 to render noticeably better results than in non-linear rendering.

If this doesn't yield the desired results, I could assume that fog is usually brighter than the background and thus scale down the density for exp and exp2 fog, while for linear distance fog I could replace the linearly rising fog contribution factor with a quadratic function.
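
For reference, a minimal sketch of the three fog factor curves by their conventional definitions (factor 1 = unfogged, 0 = fully fogged):

Code: Select all

#include <math.h>

float LinearFog( float Depth, float Start, float End )
{
	float F = (End - Depth)/(End - Start);
	return F<0.f ? 0.f : (F>1.f ? 1.f : F);
}
float ExpFog ( float Depth, float Density ) { return expf( -Density*Depth ); }
float Exp2Fog( float Depth, float Density ) { float D = Density*Depth; return expf( -D*D ); }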

An issue to solve is the combination of distance fog and volumetric fog. Take linear distance fog and suppose a volumetric light is placed before the start of the distance fog and another one after the point where the distance fog reaches its maximum density. In the first case you would see the full volumetric light, and in the second case you wouldn't see it at all. Due to the non-locality of the distance fog a depth sorting approach isn't feasible.
A first approach would be to render the distance fog first, scale down the volumetric light sources by the fog factor the distance fog has at their origin, and render the volumetric lights from back to front.

It is also worth mentioning that in case one decides to drop support for volumetric fog in favor of distance fog only, handling of transparencies and fog becomes far easier and faster to render, as the fog contribution just depends on the depth coordinate. In the case of linear distance fog one can even make particular use of the linear nature of the fog contribution term for some very efficient approaches.

FAQ / Glossary / References

Post by han »

FAQ
n/a

Glossary
n/a

References
n/a

- Blank -

Post by han »

- THIS POST IS INTENTIONALLY LEFT BLANK -