So I had some spare time to study things I've always wanted to.
Left: Default shader // Right: Blur shader
Note: All shader files (including the files with number suffixes) must be downloaded together. The numbers represent the clipping panel count when using Soft Clip on UIPanel components. Refer to: http://www.tasharen.com/forum/index.php?topic=13985.0
You can customize the amount of blurring by editing the two fields below. The iterations variable represents the radius, while the blurSize variable represents the offset (in UV coordinates) at which the neighboring pixels are sampled. It should be possible to expose these fields so you can mess around with them through Materials or C# scripts.
half blurSize = 0.005;
half iterations = 4;
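One way to expose them would be to turn the constants into material properties (the property names here are illustrative, not part of the package):

Properties
{
    _BlurSize ("Blur Size", Float) = 0.005
    _Iterations ("Iterations", Float) = 4
}

// ...and inside the CGPROGRAM block, replace the constants with:
half _BlurSize;
half _Iterations;

You could then drive them from C# with material.SetFloat("_BlurSize", value).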
Limitations: Originally I wanted to implement a Gaussian blur, but it's quite complicated and expensive. So it ended up as a sort of "hack" that blurs the pixels along the horizontal and vertical axes only. While this may look fine at low iteration counts, it doesn't take the diagonal pixels into account, making high-radius blurs look unnatural. I am planning to work around this issue some time.
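To make the cross-shaped sampling concrete, here is a minimal sketch of the kind of loop described above (not the package's exact code):

// Cross blur: average samples along the horizontal and vertical axes only.
fixed4 crossBlur(sampler2D tex, float2 uv)
{
    half blurSize = 0.005; // UV-space step between samples
    int iterations = 4;    // blur radius in samples
    fixed4 sum = tex2D(tex, uv);
    float count = 1.0;
    for (int i = 1; i <= iterations; i++)
    {
        sum += tex2D(tex, uv + float2(blurSize * i, 0)); // right
        sum += tex2D(tex, uv - float2(blurSize * i, 0)); // left
        sum += tex2D(tex, uv + float2(0, blurSize * i)); // up
        sum += tex2D(tex, uv - float2(0, blurSize * i)); // down
        count += 4.0;
    }
    return sum / count; // diagonal neighbors never contribute
}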
For the past couple of weeks, I have been trying to replicate the Photoshop blend modes in Unity. It is no easy task; despite the advances of modern graphics hardware, the blend unit still resists being programmable and will probably remain fixed for some time. Some OpenGL ES extensions implement this functionality, but most hardware and APIs don't. So what options do we have?
1) Backbuffer copy
A common approach is to copy the entire backbuffer before doing the blending. This is what Unity does. After that it's trivial to implement any blending you want in shader code. The obvious problem with this approach is that you need to do a full backbuffer copy before you do the blending operation. There are certainly some possible optimizations, like only copying what you need to a smaller texture of some sort, but it gets complicated once you have many objects using blend modes. You can also do just a single backbuffer copy and re-use it, but then you can't stack different blended objects on top of each other. In Unity, this is done via a GrabPass. It is the approach used by the Blend Modes plugin.
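A minimal sketch of the GrabPass approach (illustrative, not the plugin's code): Unity copies the screen behind the object into _GrabTexture, and the fragment shader is then free to apply any formula, e.g. Darken:

Shader "Sketch/GrabPassDarken"
{
    Properties { _MainTex ("Texture", 2D) = "white" {} }
    SubShader
    {
        Tags { "Queue"="Transparent" }
        GrabPass { } // copies the current backbuffer into _GrabTexture
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            sampler2D _GrabTexture; // filled in by the GrabPass above

            struct v2f
            {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
                float4 grabPos : TEXCOORD1;
            };

            v2f vert(appdata_base v)
            {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = v.texcoord.xy;
                o.grabPos = ComputeGrabScreenPos(o.pos);
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                fixed4 src = tex2D(_MainTex, i.uv);
                fixed4 dst = tex2Dproj(_GrabTexture, i.grabPos);
                // Any blend formula works here, e.g. Darken:
                return fixed4(min(src.rgb, dst.rgb), 1);
            }
            ENDCG
        }
    }
}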
2) Leveraging the Blend Unit
Modern GPUs have a little unit at the end of the graphics pipeline called the Output Merger. It's the hardware responsible for taking the output of a pixel shader and blending it with the backbuffer. It's not programmable, as making it so has quite a lot of complications (you can read about it here), so current GPUs don't have a programmable one; it can only be configured.
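In Unity, that configuration is expressed per pass in ShaderLab, which is what the "Blend Unit" lines below will map to. For example:

BlendOp Add // the operation: Add, Sub, RevSub, Min or Max
Blend SrcAlpha One // the factors: SrcColor · SrcAlpha + DstColor · One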
The blend mode formulas were obtained here and here. Use them as a reference to compare with what I provide. There are many other sources. One thing I've noticed is that the provided formulas often neglect to mention that Photoshop actually uses modified formulas and clamps quantities in a different manner, especially when dealing with alpha. Gimp does the same. What follows is my experience recreating the Photoshop blend modes exclusively using a combination of the blend unit and shaders. The first few blend modes are simple, but as we progress we'll have to resort to more and more tricks to get what we want.
Two caveats before we start. First off, Photoshop blend modes do their blending in sRGB space, which means if you do them in linear space they will look wrong; generally this isn't a problem. Second, due to the amount of trickery we'll be doing for these blend modes, many of the values need to go beyond the 0 – 1 range, which means we need an HDR buffer to do the calculations. Unity can accommodate both by setting the camera to HDR in the camera settings, and also setting Gamma for the color space in the Player Settings. This is clearly undesirable if you do your lighting calculations in linear space. In a custom engine you would probably be able to set this up in a different manner (to allow for linear lighting).
If you want to try the code out while you read ahead, download it here.
[File]
A) Darken
Formula
min(SrcColor, DstColor)
Shader Output
color.rgb = lerp(float3(1, 1, 1), color.rgb, color.a);
Blend Unit
Min(SrcColor · One, DstColor · One)
As alpha approaches 0, we need the minimum to tend to DstColor, which we do by forcing SrcColor to the maximum possible color, float3(1, 1, 1).
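In ShaderLab, this blend unit configuration is:

BlendOp Min
Blend One One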
B) Multiply
Formula
SrcColor · DstColor
Shader Output
color.rgb = color.rgb * color.a;
Blend Unit
SrcColor · DstColor + DstColor · OneMinusSrcAlpha
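In ShaderLab this is a single line; because the shader pre-multiplied by alpha, the blend unit produces (Src · a) · Dst + Dst · (1 - a) = lerp(Dst, Src · Dst, a):

Blend DstColor OneMinusSrcAlpha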
C) Color Burn
Formula
1 – (1 – DstColor) / SrcColor
Shader Output
color.rgb = 1.0 - (1.0 / max(0.001, color.rgb * color.a + 1.0 - color.a)); // max to avoid infinity
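Blend Unit
SrcColor · One + DstColor · OneMinusSrcColor
This pairing is my reconstruction from the shader output: writing x = max(0.001, SrcColor · a + 1 - a), the output is S = 1 - 1/x, so S + DstColor · (1 - S) = 1 - (1 - DstColor)/x, which is exactly Color Burn (and reduces to DstColor when alpha is 0). In ShaderLab: Blend One OneMinusSrcColor.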
You can see discrepancies between the Photoshop and the Unity version in the alpha blending, especially at the edges.
H) Linear Dodge
Formula
SrcColor + DstColor
Shader Output
color.rgb = color.rgb;
Blend Unit
SrcColor · SrcAlpha + DstColor · One
This one also exhibits color "bleeding" at the edges. To be honest, I prefer the one on the right just because it looks more "alive" than the other one. The same goes for Color Dodge. However, this breaks the 1-to-1 mapping to Photoshop/Gimp.
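For reference, the ShaderLab state for this configuration is Src · a + Dst = lerp(Dst, Src + Dst, a):

Blend SrcAlpha One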
All of the previous blend modes have simple formulas, and one way or another they can be implemented via a few instructions and the correct blending mode. However, some blend modes have conditional behavior or complex expressions (complex relative to the blend unit) that need a bit of re-thinking. Most of the blend modes that follow needed a two-pass approach (using the Pass syntax in your shader). Two-pass shaders in Unity have a limitation in that the two passes aren't guaranteed to render one after the other for a given material. These blend modes rely on the previous pass, so you'll get weird artifacts: if you have two overlapping sprites (as in a 2D game, such as our use case) the sorting will be undefined. The workaround is to set the Order in Layer property to force them to sort properly.
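Schematically, a two-pass setup looks like this (a sketch, not any specific blend mode); each Pass carries its own blend state, and the second pass operates on what the first one left in the framebuffer:

SubShader
{
    Pass
    {
        // First pass: e.g. multiply the framebuffer by a factor.
        Blend DstColor Zero
        // CGPROGRAM ... ENDCG, fragment outputs the first factor
    }
    Pass
    {
        // Second pass: e.g. add the remaining term on top of the result.
        Blend One One
        // CGPROGRAM ... ENDCG, fragment outputs the second term
    }
}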
How I ended up with Overlay requires an explanation. We take the original formula and approximate via a linear blend:
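Writing S for SrcColor, D for DstColor and a for alpha, the original (piecewise) formula is:

Overlay(S, D) = 2 · S · D, if D < 0.5
Overlay(S, D) = 1 - 2 · (1 - S) · (1 - D), otherwise

and the linear blend uses D itself as the blend factor between the two branches:

Overlay ≈ (1 - D) · 2 · S · D + D · (1 - 2 · (1 - S) · (1 - D))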
We simplify as much as we can and end up with this
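Overlay ≈ D · (4 · S - 1) + 2 · D · D · (1 - 2 · S)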
The only way I found to get DstColor · DstColor is to isolate the term and do it in two passes, therefore we extract the same factor in both sides:
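Overlay ≈ D · [(4 · S - 1) + 2 · D · (1 - 2 · S)]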
However this formula doesn’t take alpha into account. We still need to linearly interpolate this big formula with alpha, where an alpha of 0 should return Dst. Therefore
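Final = (1 - a) · D + a · D · [(4 · S - 1) + 2 · D · (1 - 2 · S)]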
If we include the last term into the original formula, we can still do it in 2 passes. We need to be careful to clamp the alpha value with max(0.001, a) because we’re now potentially dividing by 0. The final formula is
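Final = D · [(1 - a) + a · (4 · S - 1) + 2 · a · D · (1 - 2 · S)]

(the division by alpha appears when this bracket is split across the two passes, hence the max(0.001, a) clamp)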
For the Soft Light we apply a very similar reasoning to Overlay, which in the end leads us to Pegtop's formula. Both are different from Photoshop's version in that they don't have discontinuities. This one also has a darker fringe when alpha blending.
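For reference, Pegtop's formula is SoftLight(S, D) = 2 · S · D + D · D · (1 - 2 · S); note it has the same D · [2 · S + D · (1 - 2 · S)] shape as the Overlay result above, so the same two-pass D · D trick applies.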
Hard Light has a very delicate hack that allows it to work and blend with alpha. In the first pass we divide by some magic number, only to multiply it back in the second pass! That’s because when alpha is 0 it needs to result in DstColor, but it was resulting in black.
[29/04/2019] Roman in the comments below reports that he couldn't get Linear Light to work using the proposed method and found an alternative. His reasoning is that the output color becomes negative, which gets clamped. I'm not sure what changed in Unity between when I did it and now, but perhaps it relied on having an RGBA16F render target, which may have since changed to some other HDR format such as RG11B10F or RGB10A2, neither of which supports negative values. His alternative becomes (using RevSub as the blend op):
Unity 5.6’s release made some changes to the default sprite shader which make parts of the shader in the following post invalid. At the bottom of this post you’ll find an updated version of the demo project that works in Unity 5.6.
Unity provides a component to outline UI objects, but it doesn’t work on world space sprites. This post will demonstrate a simple way to add outlines to sprites using an extended version of the default sprite shader along with a simple component. This could be used to highlight sprites on mouse over, highlight items in the environment, or just to make sprites stand out from their surroundings.
To begin, create a new shader in your project called Sprite-Outline. This shader provides all the functionality of the default sprite shader, with the additions to allow sprite outlines.
Shader "Sprites/Outline"
{
    Properties
    {
        [PerRendererData] _MainTex ("Sprite Texture", 2D) = "white" {}
        _Color ("Tint", Color) = (1,1,1,1)
        [MaterialToggle] PixelSnap ("Pixel snap", Float) = 0

        // Add values to determine if outlining is enabled and outline color.
        [PerRendererData] _Outline ("Outline", Float) = 0
        [PerRendererData] _OutlineColor ("Outline Color", Color) = (1,1,1,1)
    }

    SubShader
    {
        Tags
        {
            "Queue"="Transparent"
            "IgnoreProjector"="True"
            "RenderType"="Transparent"
            "PreviewType"="Plane"
            "CanUseSpriteAtlas"="True"
        }

        Cull Off
        Lighting Off
        ZWrite Off
        Blend One OneMinusSrcAlpha

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile _ PIXELSNAP_ON
            #pragma shader_feature ETC1_EXTERNAL_ALPHA
            #include "UnityCG.cginc"

            struct appdata_t
            {
                float4 vertex : POSITION;
                float4 color : COLOR;
                float2 texcoord : TEXCOORD0;
            };

            struct v2f
            {
                float4 vertex : SV_POSITION;
                fixed4 color : COLOR;
                float2 texcoord : TEXCOORD0;
            };

            fixed4 _Color;
            float _Outline;
            fixed4 _OutlineColor;

            v2f vert(appdata_t IN)
            {
                v2f OUT;
                OUT.vertex = mul(UNITY_MATRIX_MVP, IN.vertex);
                OUT.texcoord = IN.texcoord;
                OUT.color = IN.color * _Color;
                #ifdef PIXELSNAP_ON
                OUT.vertex = UnityPixelSnap (OUT.vertex);
                #endif

                return OUT;
            }

            sampler2D _MainTex;
            sampler2D _AlphaTex;
            float4 _MainTex_TexelSize;

            fixed4 SampleSpriteTexture (float2 uv)
            {
                fixed4 color = tex2D (_MainTex, uv);

                #if ETC1_EXTERNAL_ALPHA
                // get the color from an external texture (usecase: Alpha support for ETC1 on android)
                color.a = tex2D (_AlphaTex, uv).r;
                #endif //ETC1_EXTERNAL_ALPHA

                return color;
            }

            fixed4 frag(v2f IN) : SV_Target
            {
                fixed4 c = SampleSpriteTexture (IN.texcoord) * IN.color;

                // If outline is enabled and there is a pixel, try to draw an outline.
                if (_Outline > 0 && c.a != 0) {
                    // Get the neighbouring four pixels.
                    fixed4 pixelUp = tex2D(_MainTex, IN.texcoord + fixed2(0, _MainTex_TexelSize.y));
                    fixed4 pixelDown = tex2D(_MainTex, IN.texcoord - fixed2(0, _MainTex_TexelSize.y));
                    fixed4 pixelRight = tex2D(_MainTex, IN.texcoord + fixed2(_MainTex_TexelSize.x, 0));
                    fixed4 pixelLeft = tex2D(_MainTex, IN.texcoord - fixed2(_MainTex_TexelSize.x, 0));

                    // If one of the neighbouring pixels is invisible, we render an outline.
                    if (pixelUp.a * pixelDown.a * pixelRight.a * pixelLeft.a == 0) {
                        c.rgba = fixed4(1, 1, 1, 1) * _OutlineColor;
                    }
                }

                c.rgb *= c.a;

                return c;
            }
            ENDCG
        }
    }
}
Now create a new material called SpriteOutline and assign the newly created shader to it in the inspector.
Next create a new C# script and name it SpriteOutline. This component is going to handle updating our material in the editor and at runtime to toggle the outline off or on and also change the color. This component can also be targeted in an animation to enable or disable outlines for specific animation frames or to change the outline color.
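The full component ships with the demo project below; a minimal sketch of what it does looks like this: it pushes the outline state into the [PerRendererData] shader properties through a MaterialPropertyBlock.

using UnityEngine;

[ExecuteInEditMode]
public class SpriteOutline : MonoBehaviour
{
    public Color color = Color.white;

    private SpriteRenderer spriteRenderer;

    void OnEnable()
    {
        spriteRenderer = GetComponent<SpriteRenderer>();
        UpdateOutline(true);
    }

    void OnDisable()
    {
        // Hide the outline when the component is disabled or removed.
        UpdateOutline(false);
    }

    void Update()
    {
        // Keep the outline color in sync while enabled.
        UpdateOutline(true);
    }

    void UpdateOutline(bool outline)
    {
        MaterialPropertyBlock block = new MaterialPropertyBlock();
        spriteRenderer.GetPropertyBlock(block);
        block.SetFloat("_Outline", outline ? 1f : 0f);
        block.SetColor("_OutlineColor", color);
        spriteRenderer.SetPropertyBlock(block);
    }
}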
Now that the hard work is done, add a few sprites to your scene. Change the material field of the SpriteRenderer component to the SpriteOutline material created above. You’ll also want to add the SpriteOutline component to this game object to show a white outline by default. To hide the outline simply disable or remove the component.
With all that completed, you should now have a sprite with a white outline. In the inspector you can change the color to anything you’d like, independently from the SpriteRenderer color. The custom shader also maintains all existing functionality of the default sprite shader.
Please download the demo project and play around with it to get a better idea of how this technique looks and works. It contains a single scene with three examples of outlined sprites, one of which is animated.
Shaders can be complicated, but they are very powerful and make it quite easy to implement graphical features, even in 2D. If you have any further questions please feel free to message me on Twitter @RyanNielson or comment below.
Update (June 2, 2016)
Some people have been asking about how to change the thickness of the sprite outlines. Please download this new demo project with the changes that add this functionality. Changes were made to both the shader and the component. Just adjust the outline size slider on the SpriteOutline component to change the outline size. Outline size is limited to 16 to prevent issues with shader for-loop unrolling. It hasn't been tested thoroughly, so your results may vary, but it's probably a good place to start.
Update (April 7, 2017)
Unity 5.6 has been released, and along with that came some changes to the default sprite shader. Unfortunately, this seems to be causing issues with parts of the method used above. Please download this new demo project, which changes the sprite outline shader to incorporate the 5.6 shader changes.
If you ever want to create cool effects like heat haze, thick glass refraction, or underwater streams, you will probably come across the Detonator package and its HeatDistort shader. But it looked very complicated to me, so I wrote my own, and it works well with the latest Unity 2D system.
NOTE: it works only with Unity Pro and with deferred lighting on. It works the same way Detonator's version does: it takes the rendered image and projects it correctly onto a plane, distorted by a texture.
We will split this tutorial into two steps: 1. Explain the shader. 2. Show how to use it.
_Refraction - the amount of distortion. _DistortTex - the texture that drives how we distort the environment. You can use any colored texture to produce distortion, but in fact only the red and green channels act as the distortion vector.
Tags { "RenderType"="Transparent" "Queue"="Overlay" } - We set the Queue to Overlay because we want to render this effect after everything else.
#pragma surface surf NoLighting #pragma vertex vert - We use a custom NoLighting model and a custom vertex function to deal with the vertex color used by the particle system. The lighting function is just a little stub: we write only Emission, to get an absolutely unlit shader.
float3 distort = tex2D(_DistortTex, IN.uv_DistortTex) * float3(IN.color.r, IN.color.g, IN.color.b);
float2 offset = distort * _Refraction * _GrabTexture_TexelSize.xy;
IN.screenPos.xy = offset * IN.screenPos.z + IN.screenPos.xy;
float4 refrColor = tex2Dproj(_GrabTexture, IN.screenPos);

Here we read the distortion texture, modulate it by the vertex color, calculate the screen-space offset, and sample the grabbed screen texture projected onto the model. That's it.
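Putting the fragments together, a sketch of the whole shader could look like this (reconstructed from the pieces above; the shader name and property defaults are my own):

Shader "Custom/HeatDistort"
{
    Properties
    {
        _Refraction ("Refraction", Float) = 50
        _DistortTex ("Distort Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "RenderType"="Transparent" "Queue"="Overlay" }
        GrabPass { } // the screen behind the object ends up in _GrabTexture

        CGPROGRAM
        #pragma surface surf NoLighting vertex:vert

        sampler2D _GrabTexture;
        float4 _GrabTexture_TexelSize;
        sampler2D _DistortTex;
        float _Refraction;

        struct Input
        {
            float2 uv_DistortTex;
            float4 screenPos;
            float4 color; // vertex color from the particle system
        };

        void vert(inout appdata_full v, out Input o)
        {
            UNITY_INITIALIZE_OUTPUT(Input, o);
            o.color = v.color; // pass the particle color through
        }

        // The "little stub": no light contribution at all.
        fixed4 LightingNoLighting(SurfaceOutput s, fixed3 lightDir, fixed atten)
        {
            return fixed4(0, 0, 0, s.Alpha);
        }

        void surf(Input IN, inout SurfaceOutput o)
        {
            float3 distort = tex2D(_DistortTex, IN.uv_DistortTex).rgb * IN.color.rgb;
            float2 offset = distort.rg * _Refraction * _GrabTexture_TexelSize.xy;
            IN.screenPos.xy = offset * IN.screenPos.z + IN.screenPos.xy;
            float4 refrColor = tex2Dproj(_GrabTexture, IN.screenPos);
            o.Emission = refrColor.rgb; // emission only, so absolutely unlit
        }
        ENDCG
    }
    FallBack "Diffuse"
}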
2. Create a simple material and apply that shader. Next, if you use a particle system, apply a particle render order script to it; otherwise some objects will be rendered after the distortion, which looks weird. You can use the particle Color or Color Over Lifetime to get more/less distortion. I often set it up this way:
to make the distortion fade in and fade out. That's probably all you need to know about it.
Known bugs:
1. The distortion is drawn over foreground sprites.
How to resolve: add the particle order script to the particle system and set its order higher than the foreground sprite's. Note that the effect does not work with an orthographic camera.
2. Too much distortion at a large distance from the camera.
How to resolve: add a script that decreases the material's distortion property with distance to the camera.
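A sketch of such a script (the component name and falloff curve are my own choices; it assumes the material exposes _Refraction as above):

using UnityEngine;

public class DistortionFalloff : MonoBehaviour
{
    public float baseRefraction = 50f; // _Refraction value at zero distance
    public float falloff = 0.1f;       // higher values fade the effect sooner

    private Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();
    }

    void Update()
    {
        float distance = Vector3.Distance(Camera.main.transform.position, transform.position);
        rend.material.SetFloat("_Refraction", baseRefraction / (1f + falloff * distance));
    }
}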