In this section I will describe all of the finished filters in the program.
This filter is used to darken or lighten the image. It works in a somewhat unusual manner, because I’ve tried to avoid intensity value clipping in the program. The image is first converted to YRGB, a format very similar to YUV, and the filter only affects the Y channel. The filter is non-linear. As the second step (after the RGB → YRGB conversion) a lookup table is generated for all 4096 possible luminance values (12 bits per channel are used). The lookup table is then blurred several times to reduce clipping artifacts, and finally the table is applied to the Y channel as a transformation function. As the last step, the image is converted back to RGB format. The brightness filter also has a subfilter, “maximize luminance range”. This subfilter scans the Y channel after the brightness filter is applied, and stretches the range of the present values to 0 - 4095. With this subfilter the darkest pixels are always black and the brightest pixels are always white. As a side effect, the contrast of too dark and too bright images is increased. The subfilter can also be used in stand-alone mode, in which case it acts as an auto-levels filter.
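The lookup-table approach above can be sketched as follows. The exact brightness curve, blur radius and number of blur passes are assumptions for illustration; the real filter applies the table to the Y channel of the 12-bit YRGB image.

```python
# Hypothetical sketch: build a 12-bit brightness LUT, then box-blur the LUT
# itself a few times to soften the hard clipping at the ends of the range.

def build_brightness_lut(offset, passes=3, radius=2):
    """Shift every luminance value by `offset`, clip to 0-4095, then blur the LUT."""
    lut = [min(4095, max(0, y + offset)) for y in range(4096)]
    for _ in range(passes):                      # repeated box blur of the table
        blurred = []
        for i in range(4096):
            lo, hi = max(0, i - radius), min(4095, i + radius)
            window = lut[lo:hi + 1]
            blurred.append(sum(window) // len(window))
        lut = blurred
    return lut

lut = build_brightness_lut(500)
y_channel = [0, 1000, 3800, 4095]                # sample Y values
brightened = [lut[y] for y in y_channel]         # LUT applied as a transformation
```

Blurring the table instead of the image is what keeps the transformation smooth near the clipped ends, which is the stated goal of avoiding clipping artifacts.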
This filter works in the RGB colorspace, almost like the standard contrast filter in other applications, with the small difference that it also blurs the contrast lookup tables to reduce clipping artifacts. The filter works in 2 steps: the first step is the creation and blurring of the lookup tables, and the second step is applying the lookup tables to the entire image in RGB format.
This filter works in the HSV colorspace and affects only the Hue and Saturation channels. The saturation values have 12 bits of precision, while the hue is stored with 0.02 degree precision. The hue is changed by adding the value specified by the user to every hue value in the image. The saturation filter works by multiplying every saturation value in the image by the value specified by the user and dividing the result by 100, giving a 0 - 200% saturation range. A 0% saturation results in a gray image, while 200% saturation results in an over-saturated image. This filter has 2 subfilters:
This is the standard levels / gamma filter. It can work on the luminance channel, all RGB channels together, or the red, green or blue channel only. The user selects the channel to filter, the histogram for that channel is shown, and the user can then set the input / output minimum, maximum and gamma values and apply the wanted levels filter to the image. The filter works by creating a lookup table and applying it to the entire image.
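A minimal sketch of such a levels lookup table, assuming 12-bit values and the common formula out = out_min + (out_max - out_min) * norm^(1/gamma), where norm is the input value normalized to the [in_min, in_max] range (the exact formula used by the program is not stated, so this is an illustration):

```python
# Hypothetical levels/gamma LUT for 12-bit values.

def levels_lut(in_min, in_max, gamma, out_min=0, out_max=4095):
    lut = []
    for v in range(4096):
        norm = (v - in_min) / (in_max - in_min)
        norm = min(1.0, max(0.0, norm))          # clip values outside the input range
        lut.append(round(out_min + (out_max - out_min) * norm ** (1.0 / gamma)))
    return lut

identity = levels_lut(0, 4095, 1.0)              # gamma 1 over the full range: no change
brighter = levels_lut(0, 4095, 2.0)              # gamma > 1 lifts the midtones
```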
The sharpen filter works in 3 modes: standard, horizontal and vertical. In all 3 modes the image is filtered by the corresponding blur filter, and the blurred values are subtracted from the original values, resulting in a reversed blur (unsharp mask) effect. The user can specify the radius and the intensity of the filter. The radius is used in the blurring process, while the intensity is used as an alpha channel in the layer mixing phase. The 2 mixed layers are the original image and the filtered image; the intensity is the alpha channel of the filtered image.
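The subtract-the-blur idea can be sketched on a single row of 12-bit values. The box blur and the exact mixing formula are assumptions; the point is the structure: blur, amplify the difference from the blur, then alpha-mix with the original.

```python
# Hypothetical 1-D unsharp-mask sketch: sharpened = original + (original - blurred),
# mixed with the original using `intensity` as the alpha of the filtered layer.

def box_blur(row, radius):
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - radius), min(len(row) - 1, i + radius)
        window = row[lo:hi + 1]
        out.append(sum(window) / len(window))
    return out

def sharpen(row, radius, intensity):
    blurred = box_blur(row, radius)
    sharp = [min(4095, max(0, round(v + (v - b)))) for v, b in zip(row, blurred)]
    return [round(o * (1 - intensity) + s * intensity) for o, s in zip(row, sharp)]
```

On an edge, the pixel just before it is pushed darker and the pixel just after it brighter, which is the visual sharpening.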
There are 4 variants of the blur filter. They all have intensity and radius values set by the user. The intensity value is used in the same manner in all of the 4 variants, as an alpha channel for the filtered image, which is then mixed with the original image. The description of the filter variants follows:
The rotation is done in 2 phases:
There are 2 types of this transformation filter: horizontal and vertical mirror. The filter is very simple; it works by reversing the image along the X or Y axis.
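The two mirror variants reduce to reversing each row or reversing the row order; a minimal sketch on a 2-D list of pixels:

```python
# Horizontal mirror reverses each row; vertical mirror reverses the row order.

def mirror_horizontal(image):
    return [row[::-1] for row in image]

def mirror_vertical(image):
    return image[::-1]

img = [[1, 2, 3],
       [4, 5, 6]]
```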
This filter pair is used to change the image resolution / aspect ratio. The crop filter is very simple: it copies the part of the image inside the specified boundary to a temporary buffer, resizes the image buffer to the size of the temporary buffer, copies the pixels back from the temporary buffer, and destroys the temporary buffer. After the crop, the image is ready for resizing. There are 3 methods of resize:
The purpose of this filter is to remove the red eye artifact produced by some digital and analog cameras. It works in a semi-automatic manner: the user needs to click on the eye, and the filter will try to filter out the artifact. The filter selects a 4% x 4% block of the image and resizes it to a 15×15 block, making the filter independent of the original image resolution. The pixel selected by the user is then used as a base color, and nearby pixels are checked for similarity: if they are similar to the base color they are added to the mask as artifact pixels, otherwise they are ignored. When this is completed, the mask is blurred to reduce the edginess of the filter, and then resized back to 4% x 4% of the original image resolution. The filter engine then converts the masked pixels to non-red by averaging the G and B values and placing the average in the R, G and B components, resulting in a gray shade close to a normal human eye shade. The mask is also used to determine the strength of the effect for each pixel. As a result, the filter can’t always remove the artifact in one pass; in some cases the user will have to click 2-3 times on the image to completely remove the red eye artifact, but the filter works with all types of the artifact, from brightest to darkest and from near-gray to colored.
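The final de-reddening step described above can be sketched per pixel (the function name and the 0.0-1.0 strength convention are assumptions; the averaging of G and B is as described):

```python
# Replace R, G and B with the G/B average, weighted by the mask strength.

def remove_red(pixel, strength):
    """`strength` is this pixel's mask value, 0.0 (untouched) to 1.0 (full effect)."""
    r, g, b = pixel
    gray = (g + b) // 2
    return tuple(round(c * (1 - strength) + gray * strength) for c in (r, g, b))
```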
The noise removal filter works in 3 phases:
The equalize filter equalizes the values in the channels, resulting in the same number of pixels for every possible intensity value (0 to 4095). It works by sorting the values and using each pixel’s position in the sorted order as its new value. The user can choose to filter only the luminance channel (in YUV mode) or all 3 RGB channels (in RGB mode). It is also possible to set the strength of the filter: the strength is used as an alpha channel, and the filtered image is mixed with the original.
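The rank-based equalization can be sketched on a flat list of channel values (the scaling of ranks to the 0-4095 range and tie handling are assumptions):

```python
# Replace each value by its rank in the sorted order, scaled to 0-4095,
# then mix with the original using `strength` as a constant alpha.

def equalize(values, strength=1.0):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for rank, i in enumerate(order):
        ranks[i] = rank * 4095 // max(1, len(values) - 1)
    return [round(v * (1 - strength) + r * strength)
            for v, r in zip(values, ranks)]
```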
This filter works by applying a discrete wavelet transform to the image and then normalizing the high frequency values to the -2048 to 2048 range. The user can set the strength of this filter.
The auto region equalize filter divides the image into 32×32 pixel squares and scans each square for its maximum RGB component value. The filter creates a small image from the maximum values, blurs it (radius 1) and resizes it to the original image resolution. This image is then used to normalize every pixel of the original image, using the equation I’ = I * 4095 / Max. The user can specify the strength of the filter. This filter can be used both as an auto-brightness filter and to brighten the dark parts of the image without changing the bright parts.
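The per-pixel normalization step is exactly the equation above; sketched with the local maximum taken from the blurred maximum map (the strength mixing is an assumption):

```python
# I' = I * 4095 / Max, where Max is the blurred per-region maximum for this pixel.

def region_equalize(pixel, local_max, strength=1.0):
    normalized = min(4095, pixel * 4095 // max(1, local_max))
    return round(pixel * (1 - strength) + normalized * strength)
```

A pixel in a region whose maximum is already 4095 is unchanged, which is why bright parts of the image are left alone.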
This is the standard general convolution filter with a 5×5 convolution matrix and 12 bit / component precision. It works by multiplying the image pixels by the matrix values, calculating the sum of the multiplied values, dividing the sum by the specified divisor and finally adding the specified value to the result. The values are clipped to the 0 to 4095 range. The user can also save / load convolution matrices. I’ve also included a few pre-created convolution matrix examples:
This effect works by creating a light and a glow texture based on the input values specified by the user, such as:
The filter also creates a bumpmap from the image luminance multiplied by the bumpmap height value defined by the user. The bumpmap is then used to offset the light and glow texture coordinates during the main rendering loop. The main rendering loop calculates the texel coordinates, multiplies the image RGB values by the light texel value and adds the glow texel value to the result.
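The per-pixel equation of the rendering loop is then out = rgb * light + glow; a sketch with the texture lookups and bumpmap offsets reduced to plain values (an assumption for illustration):

```python
# out = rgb * light_texel + glow_texel, clipped to the 12-bit range.

def shade(rgb, light_texel, glow_texel):
    """`light_texel` is a multiplier (1.0 = neutral), `glow_texel` an additive 12-bit value."""
    return tuple(min(4095, round(c * light_texel + glow_texel)) for c in rgb)
```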
This effect reduces the image resolution, sharpens the image and then resizes it back to the original resolution using a smart interpolation technique, sharpening at every step, resulting in a painting-like image.
The emboss effect blurs the image by the specified radius, then subtracts the neighboring pixel in one of the 4 possible directions from the current pixel, multiplies the result by the radius, and mixes the result back into the original image using the specified strength as an alpha value.
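A sketch of the emboss step on one row, assuming a horizontal direction. The mid-gray bias (+2048) is my assumption, added so that flat areas come out gray as emboss effects usually do; the source does not state how flat areas are handled:

```python
# (blurred[i] - blurred[i + radius]) * radius, biased to mid-gray (assumption),
# then mixed with the original using `strength` as alpha.

def emboss_row(blurred, original, radius, strength):
    out = []
    for i, v in enumerate(blurred):
        neighbor = blurred[min(len(blurred) - 1, i + radius)]
        e = min(4095, max(0, (v - neighbor) * radius + 2048))
        out.append(round(original[i] * (1 - strength) + e * strength))
    return out
```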
This effect works by sharpening the image to an unreasonable extent and then diagonally blurring the image using the specified radius. The resulting distorted image is then colorized to 300% blue, 100% green and 0% red, and mixed with the uncolorized image.
The Diffuse Glow filter works by converting the image to black and white and adjusting its levels using the values specified as the glow and clear amounts. The glow amount is the input levels maximum value, while the clear amount is the input levels minimum value. The resulting image is then blurred and added to the original image, resulting in a glowing image. The glowing image is then mixed with the original image using the strength value as a constant alpha.
This effect works by blurring the image with the specified radius and then subtracting the blurred pixel value from the original pixel value. The absolute value of this subtraction is used as the resulting pixel, which is mixed with the original pixel using the strength value as a constant alpha channel.
This effect works on exported video frames only. It searches for the motion between the frames and removes it, resulting in 3 very similar frames. The 3 frames are then mixed, with double importance given to the current frame and single importance to the previous and next frames.
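The final mixing step, assuming the double/single importance translates to a 1:2:1 weighting applied per pixel (the exact weights beyond "double importance" are an assumption):

```python
# Weighted temporal mix: previous and next frames count once, current twice.

def temporal_mix(prev_px, cur_px, next_px):
    return (prev_px + 2 * cur_px + next_px) // 4
```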
The retiming effect can be used to speed up or slow down the video. It works by predicting the missing frames from the data in the existing frames. This is an image sequence effect.
This is a rare effect in image sequence editing software in general. It works by combining motion estimation based deinterlacing, crossfading between neighboring frames, resizing the video to match the requested standard, and finally interlacing the frames.
Unlike the deinterlace routines in other applications, in this project I’ve used motion estimation to achieve deinterlacing. The image is separated into 2 images based on the field settings, the motion effect is removed from one of the images, and the images are then mixed together. This way the deinterlaced frames are sharp and they don’t lose details.
Video stored on VHS tapes usually has a large amount of noise in the chrominance channels. This effect removes that noise by cross-mixing the chrominance of adjacent frames.
The following basic program features are completed:
In this section I will list all the filters in the program grouped in the same manner as they are in the program GUI.
The program has undergone 2 complete refactorings, but there should be no need for further interventions, because the code is now completely object oriented, standardized, and ready for porting. All the OS dependent procedures are now stored in the VCL_Dependent object, so anyone who wants to port this program will only need to change this one unit.
The program is functional and has been in public beta testing since 07. 08. 2005. The project has also been renamed from “IMT” to “Final touch”.
These are the project statistics at the moment of this wiki update:
This project is developed under the GNU General Public License (GPL).