Digital Signal Processing

Scientific & Technical

Digital Signal Processing No. 3-2015

In the issue:

- remote sensing of the Earth

- stitching of Earth images
- combined vision systems
- estimation of the noise influence on TV image
- structural image recovery
- automatic georeferencing accuracy control
- field game episode detection
- video matting algorithms

- localization of the eye centers
- digital images compression

Fusion of hyperspectral images of the Earth, acquired in different spectral ranges
Eremeev V.V., doctor of engineering sciences, director of the RSREU research institute FOTON,
Makarenkov A.A., postgraduate, RSREU research institute FOTON
Egoshkin N.A., Candidate of engineering sciences, RSREU research institute FOTON

Keywords: hyperspectral imagery, fusion, spectral unmixing, multispectral images, spatial resolution enhancement.

Hyperspectral imagery (HSI) is a new and promising field of Earth remote sensing. It is based on the principle of splitting the radiant energy reflected from the Earth's surface into tens or hundreds of fluxes, each of which corresponds to a very narrow spectral band. An image is then registered in every such band, and all of the acquired images form the so-called hypercube. The spectral resolution of a hyperspectrometer depends on the number of spectral bands: the higher the resolution, the more precisely the spectrum is registered. However, increasing the band count splits the registered signal, so the signal, and with it the signal-to-noise ratio, decreases dramatically. There is also a trend in the development of hyperspectral imagers toward registering imagery within the 0.4 to 2.5 micron spectral range (visible, near-IR and IR) with a single device, although radiation in different spectral bands must be registered with specific sensors. A design solution has therefore emerged in practice in which the fairly wide registered spectral range is divided into a number of partially overlapping ranges. Radiation from each range is registered independently by a separate optoelectronic device consisting of an optical subassembly, a dispersive element and an optoelectronic converter (OEC), so that all OECs register imagery of the same part of the Earth's surface but in different spectral ranges. For example, the hyperspectrometers installed on the Resurs-P and Earth Observing-1 (EO-1) spacecraft are designed according to this principle. As a result, the imagery obtained by such an instrument comprises several hypercubes (each registered in its own spectral range) that differ in radiometric and geometric parameters and in spatial resolution.

In processing data from such systems, an important stage is the combination of all data into a single hypercube covering all spectral ranges of the initial hypercubes. The task of geometric and radiometric alignment of the initial hypercubes is well known, so the present paper does not address these issues.

The fusion of hypercubes obtained from different OECs into an image with a single spatial resolution has not been researched sufficiently. We therefore pose the task of obtaining a combined hypercube with the maximum possible resolution. The prerequisite for this task statement is the fact that the set of initial hypercubes contains a hypercube with the best resolution, which can be used to calculate the compensating filter.

The present paper suggests using linear filtering of images to solve this task. The filter parameters are estimated by processing the images using one of two approaches: in the spatial domain by the least squares method, or in the spectral domain based on the ratio of the Fourier transforms of the hyperspectral image channels.
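A minimal numpy sketch of the spectral-domain variant: the compensating filter is estimated as a regularized (Wiener-style) ratio of the spectra of the best-resolution channel and a lower-resolution channel, then applied by inverse transform. The function names and the regularization constant are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_compensating_filter(low_res, reference, eps=1e-3):
    """Estimate a frequency-domain compensating filter as a regularized
    ratio of the spectra of the best-resolution reference channel and a
    lower-resolution channel of the same scene."""
    L = np.fft.fft2(low_res)
    R = np.fft.fft2(reference)
    # Regularization keeps near-zero spectral coefficients from blowing up.
    return R * np.conj(L) / (np.abs(L) ** 2 + eps)

def apply_filter(image, H):
    """Apply the compensating filter by pointwise spectral multiplication."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))
```

With a sufficiently sharp reference channel, applying the estimated filter to the low-resolution channel approximates an inverse filtering of its blur.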

As a result of experimental research on the proposed approaches using HSI from the Resurs-P spacecraft, the following has been determined:
- the proposed algorithms reduce the difference in image resolution between OEC 1 and OEC 2 by more than a factor of 3;
- the spatial approach is less sensitive to HSI characteristics, but it can be used only with small filter dimensions;
- the frequency approach does not limit the filter dimensions, but it imposes stricter requirements on the statistical characteristics of the HSI;
- the form of the point spread functions of OEC 1 and OEC 2 of the Resurs-P No. 1 and No. 2 hyperspectral instruments allows the inverse filtering task to be solved without significant distortions.


1. Eremeev V.V. Current trends of analyzing and improving the quality of aerospace imagery of the Earth surface // Digital Signal Processing. 2012. No. 1. PP. 38-44.

2. Achmetov R.N., Stratilatov N.R., Yudakov A.A., Vezenov V.I., Eremeev V.V. Models of formation and some algorithms of hyperspectral image processing // Izvestiya, Atmospheric and Oceanic Physics, 2014, Vol. 50, No. 9, pp. 867-877.

3. Lucas Parra, Clay Spence, Paul Sajda, Andreas Ziehe, Klaus-Robert Muller. Unmixing Hyperspectral Data // Advances in Neural Information Processing Systems 12 (Proc. NIPS*99). 2000. PP. 942-948.

4. J.J. Settle, Linear mixing and the estimation of end-members for the spectral decomposition of remotely sensed scenes, SPIE Remote Sensing for Geology, 2960. 1996. PP. 104-109.

5. Iordache, M.-D.; Bioucas-Dias, J.M.; Plaza, A., "Sparse Unmixing of Hyperspectral Data", Geoscience and Remote Sensing, IEEE Transactions on , vol.49, no.6. 2011. PP. 2014-2039.

6. Eremeev V.V., Makarenkov A.A., Moskvitin A.E. Specifics of analysis and processing of information from satellite hyperspectral Earth imaging systems // Digital Signal Processing. 2010. No. 4. PP. 38-43.

7. Eremeev V.V., Makarenkov A.A., Moskvitin A.E., Yudakov A.A. Improving Object Readability on Hyperspectral Imagery of the Earth's Surface // Digital Signal Processing. 2012. No. 3. PP. 35-40.

8. Eremeev V.V. Current problems in the processing of remote sensing data // Radiotechnics. 2012. No. 3. PP. 54-64.

9. Eremeev V.V., Makarenkov A.A., Moskvitin A.E. Increasing the informativity of Earth survey data by fusion of hyperspectral information with data from different imaging systems // Digital Signal Processing. 2013. No. 4. PP. 37-41.

10. Yuhas, R.H., Goetz, A. F. H., and Boardman, J. W., "Discrimination among semiarid landscape endmembers using the spectral angle mapper (SAM) algorithm", In Summaries of the Third Annual JPL Airborne Geoscience Workshop, JPL Publication 92-14, vol. 1. 1992. PP. 147-149.

Image processing algorithm for combined vision system of aircraft
B.A. Alpatov, M.D. Ershov, A.B. Feldman, e-mail:
Ryazan State Radio Engineering University ("RSREU"), Russia, Ryazan

Keywords: combined vision system, image registration, edge detection, fuzzy clusterization, geometric transformations, Fourier transform.


The paper is devoted to actual problems arising in the development of combined vision systems (CVS) for aircraft. Special technical devices play an important role in improving aviation safety. These devices can warn the aircraft crew about a possible collision and show the crew the locations of important landmarks: runways, rivers, roads and railways. Onboard CVS belong to this class of devices. A CVS matches a real image (from an optical sensor) with a synthetic one (based on a digital terrain map, DTM). The latter is formed from information about the current aircraft position, which is measured with errors.

The algorithm of combined image synthesis includes the following steps:
1. Preprocessing of the source real image and detection of topographic object edges.
2. Generation of contour images based on the DTM for different aircraft orientations. Measurement errors of the navigation parameters must be taken into account at this step.
3. Matching of the contour images to estimate the transformation between the real and synthetic images.
4. Formation of the combined image using the parameters of the geometric transformations.

An algorithm for edge detection in the real images has been developed. The algorithm is based on the fuzzy C-means clustering method and includes the following procedures: preprocessing, clusterization, morphological analysis of the binary image, and parametric analysis of the detected contours. We use ROI (region of interest)-based image processing at this step. The size and position of the ROI can be defined according to information about the object derived from the DTM.
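The clusterization step can be illustrated with a plain fuzzy C-means iteration (the classical Dunn/Bezdek alternating update of memberships and centers); this is a generic sketch under arbitrary parameter values, not the authors' implementation.

```python
import numpy as np

def fuzzy_cmeans(data, n_clusters, m=2.0, n_iter=50, seed=0):
    """Classical fuzzy C-means: alternate between recomputing cluster
    centers from fuzzy memberships and memberships from distances."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)               # fuzzy memberships
    for _ in range(n_iter):
        w = u ** m                                  # fuzzified weights
        centers = (w.T @ data) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(data[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))          # inverse-distance update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```

In an edge-detection pipeline the rows of `data` would be per-pixel feature vectors (e.g. intensity and gradient magnitude), and the memberships would separate edge-like from flat pixels.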

Mismatch between the images can occur due to factors such as errors in measuring the position of the aircraft as a material point in space (geographic coordinates X, Y and height H) and errors in measuring the angles of the aircraft's orientation relative to its center (yaw, pitch and roll). A brute-force search over the orientation angles can be replaced by estimation of the Euclidean transformation, namely the offset (α, β) and the rotation angle φ. We propose to use a method based on properties of the Fourier transform to estimate the geometric transformation parameters between the real and synthetic contour images.
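One standard way to exploit the Fourier shift property for offset estimation is phase correlation; the sketch below (integer-pixel accuracy, hypothetical helper name) shows the idea, without claiming it is the authors' exact method.

```python
import numpy as np

def phase_correlation(img_a, img_b):
    """Estimate the translation of img_a relative to img_b from the
    normalized cross-power spectrum; peak location gives the shift."""
    A = np.fft.fft2(img_a)
    B = np.fft.fft2(img_b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return int(dy), int(dx)
```

Rotation can be handled the same way by resampling the spectra to log-polar coordinates first, which turns rotation into a shift.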

Experimental research on the developed algorithm of combined image synthesis was performed using real videos obtained from an aircraft. The proposed algorithm has shown its efficiency: the average time of whole-image synthesis is reduced by a factor of 20 with rough estimation of the geometric transformation parameters. When the estimation accuracy is close to the maximum available for the image resolution, the performance increases a thousandfold in comparison with brute-force search and threefold in comparison with the correlation approach.

1. Alpatov B., Babayan P., Khosenko M. Image Synthesis Using Searching and Tracking Techniques in Combined Vision Systems // Proceedings of 4th Mediterranean Conference on Embedded Computing (MECO). 2015. P. 147-150.

2. Mishin A.Y., Kiryushin E.Y. et al. Compact complex navigation system based on micromechanical sensors // Trudy MAI. 2013. No. 70. P. 1-21.

3. Alpatov B.A., Strotov V.V. An estimation algorithm of the multispectral image geometric transformation parameters based on multiple reference area tracking // Proceedings of the SPIE, 2013. Vol. 8713. 8 p.

4. Szeliski R. Computer Vision: Algorithms and Applications. London: Springer-Verlag, 2011. 812 p.

5. Alpatov B.A., Babayan P.V., Balashov O.E., Stepashkin A.I. Methods of automated object detection and tracking. Image processing and control. M.: Radiotechnika, 2008. 176 p.

6. Visilter Y.V., Zheltov S.Y., Bondarenko A.V. et al. Image processing and analysis in machine vision. Lectures and practical exercises. M.: Fizmatkniga, 2010. 672 p.

7. Dunn J.C. A Fuzzy Relative of the ISODATA Process and Its Use in Detecting Compact Well-Separated Clusters // Journal of Cybernetics. 1973. Vol. 3. P. 32-57.

8. MacQueen J.B. Some Methods for Classification and Analysis of Multivariate Observations // Proceedings of 5-th Berkeley Symposium on Mathematical Statistics and Probability. 1967. Vol. 1. P. 281-297.

9. Cormen T.H., Leiserson C.E., Rivest R.L., Stein C. Introduction to Algorithms, 2nd ed. MIT Press, 2001. 1292 p.


Balashov O.E., e-mail:

Keywords: obstacle detection, time to collision, measurement of distance to objects.

This paper describes the problem of detecting ground obstacles during low-altitude flight. It presents an algorithm for measuring the distance between the aircraft and an obstacle using a single video sensor. Obstacle detection is based on analyzing the coordinates of feature points across the image sequence. Points lying above the earth's surface are considered to belong to an obstacle, and the times to collision with them are calculated.

Flying at low altitude is extremely dangerous because collisions with obstacles (bridges, high-rise buildings, pillars) frequently occur. To reduce the number of collisions, it is necessary to use assistance systems that warn of approach to high-rise structures. Collision avoidance systems may use different types of sensors that provide information about the surrounding area. The paper describes an algorithm for the detection of tall ground structures. Only one video sensor is used to gather information about the surroundings of the aircraft. Analysis of the image sequence and of the aircraft coordinates allows high-rise buildings to be detected. Next, the distance to the obstacle and the time to collision are estimated. The time-to-collision information allows the system to warn the pilot about the threat of a collision.

Obstacle detection and estimation of obstacle parameters are performed by analyzing the coordinates of points of observed objects in the video sequence. To match object points across a video sequence, their images must have distinguishing features; that is, the points must be feature points. Various feature point detectors exist. In most cases, images of obstacles have clear boundaries and therefore contain corner points. Consequently, we use a corner detector (the Harris corner detector) to detect obstacles. Analysis of the points on the edges in the image allows the geometrical parameters of the obstacles to be evaluated.
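For illustration, the Harris corner response can be computed in a few lines of numpy (a box window is used here instead of the usual Gaussian, and the constants are conventional defaults, not values from the paper):

```python
import numpy as np

def harris_response(img, k=0.04, win=2):
    """Harris corner response map: positive at corners, negative along
    edges, near zero in flat regions."""
    iy, ix = np.gradient(img.astype(float))     # image gradients
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):
        # Sum over a (2*win+1)^2 neighborhood (wraps at borders via roll).
        out = np.zeros_like(a)
        for dy in range(-win, win + 1):
            for dx in range(-win, win + 1):
                out += np.roll(a, (dy, dx), axis=(0, 1))
        return out

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy ** 2                  # structure tensor determinant
    trace = sxx + syy
    return det - k * trace ** 2
```

Corner points are then extracted as local maxima of the response above a threshold.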

The video sensor must move in space for obstacles to be detected ahead of the aircraft. As a result, images of objects (obstacles) are obtained from different angles. Given the coordinates of the aircraft in space and the coordinates of feature points computed across the image sequence, the coordinates of an object in space can be found. By analyzing the coordinates of the feature points, we seek the points that are located above the surface of the Earth; these points may belong to images of obstacles. The obtained points are combined into groups of feature points. For each object, the coordinates, range, dimensions and height are calculated. Objects of considerable height are considered obstacles. Knowing the altitude of the aircraft, the system decides whether there is a danger of collision. If the obstacle poses a threat, the system informs the pilot of the approaching danger.
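Recovering the 3D position of a feature point from several sensor positions reduces to a least-squares intersection of viewing rays; the generic sketch below (not the authors' formulation) finds the point minimizing the summed squared distance to all rays.

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection of viewing rays: each ray is given by a
    sensor position and a direction vector toward the observed point."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```

With exact rays from two distinct positions the solution is the observed point itself; with noisy rays it is the point closest to all of them.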

The developed algorithm calculates the distance to the obstacles and their height above the ground. The accuracy of the algorithm depends on the speed of the aircraft and on the distance to the obstacles.


1. Y.V. Vizilter, S.Y. Zeltov, A.V. Bondarenko, M.V. Ososkov, A.V. Morzhin, Image processing and analysis tasks in machine vision: Lectures and practical lessons, M.: Fizmatkniga, 2010.

2. B.A. Alpatov, P.V. Babayan, O.E. Balashov, A.I. Stepashkin, Methods for automatic detection and tracking of objects. Image processing and control, M.: Radiotehnika, 2008.

3. O.E. Balashov, A.I. Stepashkin, Helmet-mounted review system and target designation // Vestnik of Ryazan State University of Radio Engineering, vol. 4 (38), pp. 40-44, 2011.

4. B.A. Alpatov, O.E. Balashov, A.I. Stepashkin, Prediction of angular coordinates of moving objects in the on-board opto-mechanical systems, Information and Control Systems, vol. 5, pp. 2-7, 2011

5. B.A. Alpatov, O.E. Balashov, A.I. Stepashkin, D.V. Trofimov, Algorithm for measuring angular coordinates the line of sight of the operator, Information and Control Systems, vol. 3, pp. 18-21, 2012

6. B.A. Alpatov, O.E. Balashov, A.I. Stepashkin, D.V. Trofimov, The algorithm for computing the angular coordinates of the line of sight of the operator in helmet-mounted positioning system, Information and Control Systems, vol. 6, pp. 7-11, 2012

7. B.A. Alpatov, O.E. Balashov, A.I. Stepashkin, Problems of mathematical modeling and information processing in scientific research, Proceedings RSREU, pp. 16-25, 2003.

8. V.A. Besekersky, E.A. Fabrikant, Dynamic Synthesis of gyroscopic stabilization systems, St.P.: Shipbuilding, 1968.

9. V.N. Kalinin, V.I. Soloviev, Introduction to multivariate statistical analysis, Moscow, 2003.

The estimation of the noise influence on the television image of the indoor positioning system

A.L. Tyukin, e-mail:
A.L. Priorov, e-mail:
P.G. Demidov Yaroslavl State University (YSU), Russia, Yaroslavl

Keywords: industrial television system, digital image processing, mobile robotic platform, indoor positioning, color landmarks, estimation of the noise influence.

The article considers an indoor positioning system based on digital image processing. The image is received from an industrial television system.

Nowadays there are many indoor positioning systems, but there is no universal solution comparable to the global navigation satellite systems (NAVSTAR, GLONASS, BeiDou, Galileo, etc.). The reason is the difficulty of accurate indoor positioning over a radio channel. Therefore, this paper proposes using the visible range of the electromagnetic spectrum for indoor orientation. A simple and inexpensive color camera is sufficient for working in this range.

In such a system, the mobile robotic platform (MRP) orients itself by special color-coded landmarks. Thus, an MRP with an installed camera can orient itself indoors. The landmarks are located indoors at fixed positions with a priori known coordinates and sizes. The advantages of such landmarks in comparison with other types are that they are easy to manufacture, economical and do not require a power source, which allows them to remain operable for a long time.

The landmarks are subject to the following restrictions, which define the working efficiency of the algorithm:
- all three landmark colors should be visually identifiable;
- the centers of the color areas shall be aligned in one line and equally spaced from each other;
- the landmark surface should be matte.

The vertical landmark orientation is also preferable, as the distance between the color areas does not change during horizontal movement of the MRP.

The article describes the operation of the positioning system: the algorithm for recognition of the color-coded landmarks and the positioning algorithm. The article also describes the construction of the color mask of the television image.

Due to the possible impact of different types of noise on the positioning system, research into this influence was carried out. The research considers two methods of transformation from relative coordinates (the camera coordinate system) to absolute coordinates (the plan of the indoor space): the use of three-dimensional affine transformations (translation, rotation, scaling) and gradient descent (to achieve a minimum standard deviation).
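The gradient descent variant can be sketched as fitting a rotation and translation that minimize the mean squared deviation between camera-frame and floor-plan coordinates. The code below is an illustrative simplification (2D instead of the three-dimensional transform discussed in the article), with analytic gradients of the squared error.

```python
import numpy as np

def fit_rigid_2d(src, dst, lr=0.1, n_iter=2000):
    """Fit a rotation angle and translation mapping src points onto dst
    points by gradient descent on the mean squared deviation."""
    theta, tx, ty = 0.0, 0.0, 0.0
    for _ in range(n_iter):
        c, s = np.cos(theta), np.sin(theta)
        x = c * src[:, 0] - s * src[:, 1] + tx
        y = s * src[:, 0] + c * src[:, 1] + ty
        ex, ey = x - dst[:, 0], y - dst[:, 1]
        # Analytic gradients of the (halved) mean squared error.
        dth = np.mean(ex * (-s * src[:, 0] - c * src[:, 1]) +
                      ey * ( c * src[:, 0] - s * src[:, 1]))
        theta -= lr * dth
        tx -= lr * np.mean(ex)
        ty -= lr * np.mean(ey)
    return theta, tx, ty
```

Because the residuals of all points are averaged, isolated noisy coordinates perturb the estimate less than they would a closed-form transform computed from few points, which is consistent with the stability the study reports.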

The studies found that the system is more stable when using the gradient descent method. Besides, it shows lower dispersion of the calculated coordinates in comparison with the results of the affine transformation method.

During the studies, the system showed the greatest resistance to noise when affected by salt-and-pepper noise.

1. Lashkari A.H., Parhizkar B. and others. WIFI-Based Indoor Positioning System // Computer and Network Technology (ICCNT). 2010 Second International Conference. 23-25 April 2010. pp. 76-78.

2. Frost C. and others. Bluetooth Indoor Positioning System Using Fingerprinting. In: J. Del Ser, E. Axel Jorswieck, J. Miguez, M. Matinmikko, D. P. Palomar, S. Salcedo-Sanz, S. Gil-Lopez (eds.) Mobile Lightweight Wireless Systems. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, Springer Berlin Heidelberg, 2012. vol. 81. pp. 136-150.

3. Schekotov M.S., Kashevnik A.M. Comparative analysis of the indoor positioning systems based on communication technologies, which supported smartphones // SPIIRAN. 2012. Vol. 23. pp. 459-471.

4. Abdrahmanova A.M., Namiot D.E. Using two-dimensional barcodes for creating indoor positioning and navigation systems // Applied Informatics. 2013. No. 1. pp. 31-39.

5. Kiy K.I., Smirnov A.M. Amur robot autonomous indoor navigation using color landmarks // Electronic Science and Technology Journal Technical vision. 2013. Vol. 2. pp. 30-36.

6. Babayan P.V., Alpatov B.A. Allocation of moving objects in conditions of geometric distortion of the image // Digital signal processing. 2004. No. 4. pp. 9-14.

7. Belobryuhov M.S., Romanenko A.V. Computer vision system for cyber football // Reports of TUSUR. December. 2011. No. 2 (24), part 2. pp. 200-203.

8. Dvorkovich V.P., Dvorkovich A.V. Metrological provision of video information systems. Moscow: Technosphera, 2015. 784 p.

9. Lebedev I.M., Tyukin A.L., Priorov A.L. The development and research of the indoor navigation system for a mobile robot with possibility of obstacle detection (in Russian) // Information-measuring and control systems. 2015. vol. 13, No. 1. pp. 53-61.

10. Dzhakoniya V.E. Television. Moscow: Goryachaya liniya-Telecom, 2002. 640 p.

11. Dvorkovich V.P., Dvorkovich A.V. Digital video information systems (theory and practice). Moscow: Technosphera, 2012. 1009 p.

12. Kostilov V.P., Slusar T.V., Sushij A.V., Chernenko V.V. About improve sensitivity silicon photoelectric // Applied electronics. 2012. Vol. 11, No. 3. pp. 440-444.

13. Tyukin A., Lebedev I., Priorov A. The development and research of the indoor navigation system for a mobile robot with the possibility of obstacle detection // 16th Conference of Open Innovations Association (FRUCT16), 27-31 Oct. 2014. pp. 115-122.

14. Tyukin A.L., Lebedev I.M., Priorov A.L. The development and estimate of the work quality of the television image processing algorithm for indoor positioning tasks // Nonlinear World. 2014. Vol. 12, No. 12. pp. 26-30.

15. Priorov A., Tumanov K., Volokhov V. Efficient Denoising Algorithms for Intelligent Recognition Systems. In: Favorskaya M., Jain L.C. (eds.) Computer Vision in Control Systems 2, Intelligent Systems Reference Library, Vol. 75, Springer International Publishing, Switzerland, 2015. pp. 251-276.

16. Shapiro L., Stockman G. Computer Vision // Prentice Hall, 2001, p. 617.

17. Babayan P.V., Alpatov B.A. Methods for image processing and analysis in airborne detection and tracking objects systems // Digital signal processing. 2006. No. 2. pp. 45-51.

18. Gonzalez R., Woods R. Digital image processing. Moscow: Technosphera, 2005. 1104 p.

19. Apalkov I.V. Improved algorithms for removing noise from the image based on modified criteria for evaluating quality // Abstract of the thesis of the candidate of technical sciences: 05.12.04 / I.V. Apalkov. Moscow, 2008. 24 p.

20. Priorov A.L., Kuikin D.K., Khryaschev V.V. Detection and filtering impulse noise with random values of pulses // Digital signal processing. 2010. 1. pp. 18-22.

21. Volohov V.A. Suppression Gaussian noise in images based on principal component analysis and nonlocal processing // Abstract of the thesis of the candidate of technical sciences: 05.12.04 / V.A. Volohov. Vladimir, 2012. 19 p.

22. Tyukin A.L. The rate of digital television images for indoor positioning algorithm // Reports of Intern. Conf. Radioelectronic devices and systems for information and communication technologies-2015. 2015. pp. 300-304.

Efficiency evaluation of color correction methods for panoramic images with small-size objects
Silvestrova O.V., e-mail:

Keywords: color correction, structural similarity, color similarity, local color correction, global color correction, parametric methods.

One of the most common errors in processing panoramic images is a difference in the intensity level and chromaticity of the images being stitched, which occurs due to different exposure (illuminance) levels, differences in visual angles and other factors. This issue is often solved by applying mixing or compensation methods in the overlap area. However, such methods can lead to the loss of objects in the task of detecting and tracking small-size objects.

The paper analyzes methods of color correction for panoramic images with small-size objects. Methods used for this task can be divided into parametric approaches, which use models, and nonparametric approaches, which do not. Global methods operating in various color spaces (methods No. 1 [7] and No. 2 [13]) as well as local methods (methods No. 3 [11] and No. 4 [10]) using various probabilistic characteristics were chosen for analysis. In addition, a method [14] based on tensor voting was analyzed, but due to its low computational speed it was excluded from further research. Sets of images were created to evaluate the efficiency of the methods; these included synthetic images with distinctive brightness steps at the boundary (+6, +8, +10, +12) as well as natural images. The images in the sets also differ in the percentage of sky area in the scene: 75%, 50%, 30%, 25%. The sets contained small-size objects (an airplane) of different sizes relative to the area of the whole image.

The criterion [15], which contains two components (color similarity between the original image and the transferred image, and structural similarity between the resultant image and the transferred image), was used as the efficiency criterion in this paper.

The qualitative and quantitative analysis of the methods of color correction of panoramic images with small-size objects has shown that the local methods of color correction (No. 3 and No. 4) provide better results. In contrast to method No. 1, which operates in the decorrelated LAB color space, method No. 2 operates in the correlated RGB space, which degrades the effectiveness of the color correction. Method No. 3 cannot be recommended for images with small-size objects in spite of its high speed.
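The core of a global parametric method such as Reinhard's [7] is matching the per-channel means and standard deviations of the source image to those of the reference image. The sketch below applies the matching directly per channel; the original method performs it in a decorrelated lαβ space, so this illustrates the principle rather than reproducing method No. 1.

```python
import numpy as np

def transfer_color_stats(source, target):
    """Per-channel statistics matching: shift and scale each channel of
    `source` so its mean and standard deviation match those of `target`."""
    out = np.empty_like(source, dtype=float)
    for ch in range(source.shape[2]):
        s = source[..., ch].astype(float)
        t = target[..., ch].astype(float)
        out[..., ch] = (s - s.mean()) / (s.std() + 1e-12) * t.std() + t.mean()
    return out
```

For panorama stitching, the statistics are usually computed over the overlap area only, then the resulting transform is applied to the whole frame.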

1. M. Brown and D.G. Lowe. Recognizing panoramas. In Proc. ICCV'03, volume 2, pages 1218-1225, 2003.

2. M. Brown and D.G. Lowe. Automatic panoramic image stitching using invariant features. IJCV, 74(1):59-73, 2007.

3. .., .. // 20- - , , , 2012.

4. Xiang, B. Zou, and H. Li. Selective color transfer with multi-source images. Pattern Recognition Letters, 30(7):682-689, May 2009.

5. J. Yin and J.R. Cooperstock. Color correction methods with applications to digital projection environments. Journal of WSCG, vol. 12: 1-3, 2004.

6. W. Xu and J. Mulligan. Performance evaluation of color correction approaches for automatic multi-view image and video stitching. In IEEE Int. Conference on Computer Vision and Pattern Recognition, San Francisco, USA, 2010.

7. E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley. Color transfer between images. IEEE Computer Graphics and Applications, 21(5): 34-41, 2001.

8. G.Y. Tian, D. Gledhill, D. Taylor, and D. Clarke. Color correction for panoramic imaging. In Proc. 6th International Conference on Information Visualization, pages 483-488, 2002.

9. M. Zhang and N.D. Georganas. Fast color correction using principal regions mapping in different color spaces. Real-Time Imaging, 10(1): 23-30, 2004.

10. Y.-W. Tai, J. Jia, and C.-K. Tang. Local color transfer via probabilistic segmentation by expectation-maximization. In Proc. CVPR05, volume 1, pages 747-754, 2005.

11. Miguel Oliveira, Angel D. Sappa. Unsupervised Local Color Correction for Coarsely Registered Images // IEEE Conference on Computer Vision and Pattern Recognition: 201-208, 2011.

12. S.J. Kim and M. Pollefeys. Robust radiometric calibration and vignetting correction. IEEE TPAMI, 30(4):562-576, 2008.

13. X. Xiao and L. Ma. Color transfer in correlated color space. In Proc. 2006 ACM international conference on Virtual reality continuum and its applications, pages 305-309, 2006.

14. J. Jia and C.-K. Tang. Tensor voting for image correction by global and local intensity alignment. IEEE TPAMI, 27(1):36-50, 2005.

15. Z. Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.

Stitching of remote sensing images from staggered TDI CCD
Kuznetcov A.E., Presniakov O.A., Myatov G.N.

Keywords: stitching, staggered TDI CCD, remote sensing.

Sensors with staggered TDI CCDs are used to increase satellite swath and spatial resolution. The paper is devoted to the geometrical stitching of images obtained from the CCD matrices of such sensors.

First, the paper considers the factors that make the displacements between adjacent CCD images complex: high-frequency disturbances of the satellite orientation, and relief.

Known stitching methods, divided into four groups, have been analyzed:
- based on a shift transformation;
- based on overlap determination using the image raster (image-space-oriented);
- regression methods based on determination of the parameters of a single joining function using tie points;
- based on high-precision georeferencing.

The implementation of the stitching method proposed by the authors, which uses a DEM, is reviewed. The method is based on precise georeferencing with a rigorous sensor geometric model. The parameters of the stitched image are the same as those of an image obtained by a single virtual CCD line with a swath equal to the total swath of all TDI CCD sensors. In this case a rough DEM can be used, because elevation errors of 300-1000 m (depending on the scanner) lead to a stitching error of less than 1 pixel. The mathematical model of stitching is given, including a three-dimensional piecewise linear approximation of the geometric correspondence functions for high-speed processing. The paper includes an example of a stitched image of mountain landscape and an estimation of the stitching accuracy in this case.
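The piecewise linear speed-up can be illustrated in one dimension: tabulate the (expensive) correspondence function at a few nodes and interpolate between them, choosing the node spacing so the approximation error stays below a fraction of a pixel. The function and node count below are illustrative, not taken from the paper.

```python
import numpy as np

def make_piecewise_correspondence(func, x_min, x_max, n_nodes):
    """Tabulate a geometric correspondence function at evenly spaced nodes
    and return a fast piecewise-linear interpolant for bulk resampling."""
    xs = np.linspace(x_min, x_max, n_nodes)
    ys = func(xs)
    return lambda x: np.interp(x, xs, ys)
```

For smooth correspondence functions, the interpolation error of a segment of length h is bounded by h²·max|f''|/8, so a modest number of nodes already keeps the error well under a pixel while replacing per-pixel evaluation of the rigorous model with a table lookup.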

In conclusion, the precise stitching technology based on the proposed stitching method is reviewed. The technology provides a geodetic orientation procedure to reach high stitching precision in the case of significant georeferencing errors. The proposed technology is successfully used in the Research Center for Earth Operative Monitoring of the Russian Federal Space Agency for processing images obtained by the Resurs-P No. 1 and No. 2 spacecraft.

1. Baklanov A.I., Surveillance and monitoring systems: tutorial [in Russian], BKL Publishers, Moscow (2009). 234 p.

2. Tang, X.; Hu, F.; Wang, M.; Pan, J.; Jin, S.; Lu, G. Inner FoV Stitching of Spaceborne TDI CCD Images Based on Sensor Geometry and Projection Plane in Object Space. Remote Sens. 2014, 6, 6386-6406.

3. Weican Meng, Shulong Zhu, Baoshan Zhu, Shaojun Bian. The research of TDI-CCDs imagery stitching using information mending algorithm // Proc. SPIE 8908, International Symposium on Photoelectronic Detection and Imaging 2013: Imaging Sensors and Applications, 89081C (August 21, 2013); doi:10.1117/12.2033285.

4. Modern technologies of Earth remote sensing data processing [in Russian]. Under the editorship of V.V. Eremeev Moscow: Fizmatlit, 2015, 460 p.

5. Kuznetcov P.K., Martemianov B.V., Skirmunt V.K., Semavin V.I. Method of high precision stitching of images, obtained by multimatrix pushbroom optic-electron converter [in Russian] // Bulletin of Samara State Technical University. Technical Sciences Series. 2011. No. 3 (32). P. 69-81.

6. Gomozov O.A., Kuznetcov A.E., Los V.V., Presniakov O.A. Structure restoration of images, obtained by multimatrix scanners [in Russian] // Methods and devices of signals formation and processing in information systems. Interuniversity collection of scientific papers. Ryazan: RSREA, 2004. P. 88-96.

7. Gomozov O.A., Eremeev V.V., Kuznetcov A.E., Los V.V., Presniakov O.A., Solovyova K.K. Algorithms and Technologies for Resurs-DK satellite imagery processing [in Russian]. Current problems in remote sensing of the Earth from space. 2008. Vol. 5, No. 1. P. 69-76.

8. Voronin E.G. Method and results of geometric stitching of optoelectronic satellite images [in Russian] // Systems of Earth observing, monitoring and remote sensing: Proceedings of scientific-technical conference. Moscow: Moscow Scientific and Technical Society of Radio Engineering, Electronics and Communication named after A.S. Popov, Filial of State Research and Production Space Rocket Center TsSKB-Progress Research and Production Enterprise OPTECS, 2013. P. 256266.

9. V. Eremeev, A. Kuznetcov, G. Myatov, O. Presnyakov, V. Poshekhonov, P. Svetelkin Image structure restoration from sputnik with multi-matrix scanners. Proc. SPIE 9244, Image and Signal Processing for Remote Sensing XX, 92440F (October 15, 2014); doi:10.1117/12.2066631

10. Eremeev V.V. Methods and information technologies of interbranch processing of multispectral satellite images [in Russian]: Dr. tech. sci. diss. Ryazan: RSREU, 1997. 312 p.

11. Shewchuk J. What is a good linear finite element? Interpolation, conditioning, anisotropy and quality measures, 2003, Technical report, CS, UC Berkeley.

Automatic georeferencing accuracy control technology based on reference images from the Landsat-8 observation satellite
Kuznetcov A.E., Doctor of engineering sciences, deputy director of the RSREU research institute FOTON, Ryazan
Poshekhonov V.I., senior researcher of the RSREU research institute FOTON, Ryazan
Ryzhikov A.S., technician of the RSREU research institute FOTON, Ryazan

Keywords: georeferencing precision, spatial database, corresponding points.

The paper presents a technology for automatic control of satellite image georeferencing accuracy that does not depend on external resources.

Images obtained from the spacecraft Landsat-8 and the SRTM digital elevation model were used as the source of reference data. Unfortunately, on the basis of separate images it is rather difficult to arrange a quick search for corresponding scenes in the analyzed and reference images. Therefore, a multiscale continuous reference bitmap coverage of the Earth surface has been developed by analogy with mapping services. For this purpose, images obtained from the spacecraft Landsat-8 were transformed into the Mercator projection (WGS84).

A pyramid of layers has been designed to organize a multiscale representation of the continuous reference image. In this pyramid, the resolution of each successive layer is half that of the previous layer. Pyramid layers are represented as images of very large size, so a tile-based mechanism of data organization is used to operate on such bitmaps. Only the tiles required at the present moment are used to extract patches of the continuous image, which avoids excess requests to the disk storage, decreases the number of cache misses and increases the speed of reference bitmap formation.
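The tile-based access scheme can be sketched as follows. This is an illustrative Python sketch only: the 256-pixel tile size and the addressing convention are assumptions, not parameters taken from the paper.

```python
# Sketch of tile addressing in a multiscale reference pyramid.
# Layer 0 is the full-resolution coverage; each next layer halves the
# resolution, as in the pyramid described above.

TILE = 256  # assumed tile side in pixels

def tile_index(x_px, y_px, layer):
    """Map a full-resolution pixel to the (column, row) of its tile on a layer."""
    scale = 2 ** layer            # layer resolution is 1/2**layer of layer 0
    return (x_px // scale) // TILE, (y_px // scale) // TILE

def tiles_for_patch(x0, y0, x1, y1, layer):
    """Enumerate only the tiles needed to extract a patch, so that
    requests to the disk storage touch no redundant data."""
    c0, r0 = tile_index(x0, y0, layer)
    c1, r1 = tile_index(x1, y1, layer)
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
```

For example, a 512x256-pixel patch at the origin of layer 0 touches exactly two tiles, while the same patch on layer 1 fits into a single tile.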

The paper analyzes possible solutions for the search for corresponding points in temporally heterogeneous images of significant size (tens to hundreds of gigabytes).

Some authors use the method of correlation-extremal identification of corresponding fragments. In this method, the search for corresponding objects is carried out along a pyramid of images to accommodate significant mutual coordinate mismatches; the images are preliminarily reduced to contour form in order to exclude the influence of scene texture; and the results of the correlation matching are analyzed by a group of statistical checks of the correlation function shape.

Unfortunately, the time expenditure of this approach is rather high, and in some cases an operator finds corresponding points faster than the automated search procedure does.

The SURF method of corresponding point search is more efficient in terms of multithreaded implementation. It is based on the extraction of compact descriptions of distinctive patches (descriptors) from the compared images and their subsequent matching against each other. The SURF algorithm searches for blobs to detect characteristic points, because descriptors of this patch type can be matched with greater reliability.

In the original method, corresponding points are determined by matching all descriptors of one image against the descriptors of the other, with the proximity measure being the Euclidean distance calculated over all descriptor components. The paper suggests an optimized descriptor matching algorithm based on a preliminary decomposition of the descriptors into groups in which the sign of the Hessian matrix trace coincides among all elements and the dominant orientation angle differs by no more than 5°, with subsequent matching of descriptors only within the same group.
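The pre-grouping idea can be sketched in Python as follows. The data layout and the handling of the 5° constraint via fixed bins are assumptions for illustration; a production version would also probe neighboring orientation bins, since two orientations less than 5° apart can straddle a bin boundary.

```python
import math

def group_key(laplacian_sign, orientation_rad, bin_deg=5):
    """Bucket a SURF-style keypoint by the sign of the Hessian trace and
    a coarse bin of its dominant orientation (bin width assumed 5 deg,
    matching the constraint described in the paper)."""
    deg = math.degrees(orientation_rad) % 360.0
    return (laplacian_sign > 0, int(deg // bin_deg))

def grouped_match(desc_a, desc_b):
    """Match descriptors only inside coinciding groups.

    desc_a, desc_b: lists of (laplacian_sign, orientation_rad, vector).
    Returns (i, j) index pairs of nearest neighbours by Euclidean distance.
    """
    buckets = {}
    for j, (s, ang, v) in enumerate(desc_b):
        buckets.setdefault(group_key(s, ang), []).append((j, v))
    pairs = []
    for i, (s, ang, v) in enumerate(desc_a):
        cands = buckets.get(group_key(s, ang), [])
        if cands:
            j, _ = min(cands,
                       key=lambda c: sum((x - y) ** 2 for x, y in zip(v, c[1])))
            pairs.append((i, j))
    return pairs
```

The speed-up comes from comparing each descriptor against only its own bucket instead of the whole descriptor set of the other image; the Hessian trace sign alone already halves the candidate set on average.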

The executed research yielded the following results.

A reference bitmap information bank has been established for the territory of the Russian Federation and the CIS countries. The bitmap coverage of this territory is organized in the form of a pyramid of different-scale layers partitioned into tiles. Such a representation provides real-time access to any part of the coverage.

A high-performance and reliable mechanism for identifying the coordinates of corresponding objects in the reference and analyzed images, which are temporally heterogeneous, differ in texture and contain clouds, has been developed on the basis of the SURF algorithm. The found coordinates of reference points and their heights are passed to the procedure for estimating the georeferencing accuracy of imagery routes.

The developed technology for automatic georeferencing control has been adapted to information obtained from the spacecraft Canopus-V and is currently in operation at the Research Center for Earth Operative Monitoring.

1. Devaraj C., Shah C.A. Automated geometric correction of multispectral images from High Resolution CCD Camera (HRCC) on-board CBERS-2 and CBERS-2B // ISPRS Journal of Photogrammetry and Remote Sensing, 2014, Vol. 89.

2. Kuznetsov A.E., Svetelkin P.N. Forming color images from remote sensing data of medium and high spatial resolution // Digital Signal Processing, 2009, No. 3, pp. 36-40.

3. Gonzalez R., Woods R. Digital Image Processing. Prentice Hall, 2000. 1072 p.

4. Bay H., Tuytelaars T., Van Gool L. "Speeded Up Robust Features". ETH Zurich, Katholieke Universiteit Leuven.

Field game episode detection in video sequences
X.Yu. Petrova
M.N. Rychagov
S.M. Sedunov

LLC Samsung R&D Institute Rus, Russia, Moscow

Keywords: real time video classification, detection of sport scenes, visual cues, multi-modal features, color features, texture features, directed acyclic graph.

Video scene classification is an emerging branch of image science with high prospects for commercialization. Using the example of field game episode detection in a video stream, we demonstrate the feasibility of real-time, frame-by-frame video classification technology. The developed approach can easily be extended to detect other types of content. Field game episodes are discriminated from any other type of content, such as movies (including natural scenes), news, animations, rendered content, concerts, etc. The solution is based on visual cues described by 1-D and 2-D distributions of color, texture and multi-modal (combined color-texture) features. A set of visual cue detectors is designed in the form of heterogeneous directed acyclic graphs (DAG) with 1-D and 2-D thresholding, linear classifiers and Boolean functions in the nodes. A procedure for effectively organizing supervised training of the DAG classifier on large training datasets is presented. A method of image synthesis from visual cues, allowing the relevance of the chosen visual cues to be evaluated, is proposed.

The developed method of real-time video scene detection, discriminating field game episodes from other types of content, has six important features: a) detection is performed on a frame-by-frame basis, preserving temporal smoothness within the same video segment; b) detection is based on video cues understandable by humans; c) four new types of color detectors producing human-like judgments are proposed: yellow, green, white, and bright and saturated color; d) four types of low-level statistical features are proposed: mean gradient in green areas, histogram compactness of the luminance channel in green areas, average luminance in green areas, and average value of the blue channel in green areas; e) the classifier has the form of a directed acyclic graph with 1-D and 2-D thresholding functions, linear classifiers and logical functions in the nodes; f) a new type of scene change detector based on k-means color segmentation is proposed.
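Two of the listed green-area features can be sketched as follows. This is only an illustration under assumptions: the paper's color detectors are trained to produce human-like judgments, whereas the crude channel-dominance rule used here for the "green" mask is invented for the sketch.

```python
import numpy as np

def green_mask(rgb):
    """Crude 'green field' cue: the green channel dominates both red and
    blue by a margin. The actual detector in the paper is trained; this
    fixed rule is an illustrative assumption."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (g > r + 10) & (g > b + 10)

def green_area_features(rgb):
    """Two of the low-level statistical features listed above, computed
    over the green mask: average luminance and average blue-channel value."""
    m = green_mask(rgb)
    if not m.any():
        return 0.0, 0.0
    # Rec. 601 luma weights
    lum = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return float(lum[m].mean()), float(rgb[..., 2][m].mean())
```

Features of this kind are cheap (sums and comparisons per pixel), which is consistent with the hardware-friendliness claim below.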

The presented solution was tested on approximately 25 hours of sport and non-sport content of different quality (SD/HD, compressed with MPEG2, H.264 and XViD). The classifier provides frame-by-frame detection results and uses a small number of arithmetic operations (summations and shifts), allowing cheap hardware implementation and application in the adaptive video enhancement pipeline of a TV receiver. The presented approach clears the way for a whole set of advanced algorithms for visual recognition and object categorization. It can also be used as a pre-classifier in a video classification solution.


1. Yuan Y., Wan C. The application of edge feature in automatic sports genre classification // IEEE Conference on Cybernetics and Intelligent Systems, 2004, Vol. 2, pp. 1133-1136.

2. Wei G., Agnihotri L., Dimitrova N. TV Program Classification Based on Face and Text Processing // IEEE International Conference on Multimedia and Expo (ICME 2000), 2000, Vol. 3, pp. 1345-1348.

3. Liu Y., Kender J.R. Video frame categorization using sort-merge feature selection // Proceedings of the Workshop on Motion and Video Computing, 5-6 Dec. 2002, pp. 72-77.

4. Takagi S., Hattori S., Yokoyama K., Kodate A., Tominaga H. Sports video categorizing method using camera motion parameters // International Conference on Multimedia and Expo (ICME '03), 6-9 July 2003, Vol. 2, pp. II-461-464.

5. Takagi S., Hattori S., Yokoyama K., Kodate A., Tominaga H. Statistical analyzing method of camera motion parameters for categorizing sports video // International Conference on Visual Information Engineering (VIE 2003), 7-9 July 2003, pp. 222-225.

6. Gillespie W.J., Nguyen D.T. Classification of video shots using activity power flow // First IEEE Consumer Communications and Networking Conference, 5-8 Jan. 2004, pp. 336-340.

7. Jaser E., Kittler J., Christmas W. Hierarchical decision making scheme for sports video categorisation with temporal post-processing // IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), 27 June - 2 July 2004, Vol. 2, pp. II-908-913.

8. Brezeale D., Cook D.J., Using Closed Captions and Visual Features to Classify Movies by Genre, Poster session of the Seventh International Workshop on Multimedia Data Mining (MDM/KDD2006), 2006.

9. Liang Bai, Song-Yang Lao, Hu-Xiong Liao, Jian-Yun Chen. Audio Classification and Segmentation for Sports Video Structure Extraction using Support Vector Machine // International Conference on Machine Learning and Cybernetics, Aug. 2006, pp. 3303-3307.

10. Jiang X., Sun T., Chen B., A Novel Video Content Classification Algorithm Based on Combined Visual Features Model. 2nd International Congress on Image and Signal Processing, 2009. CISP '09. 17-19 Oct. 2009, pp. 1-6.

11. Dinh P.Q., Dorai C.,Venkatesh S. Video genre categorization using audio wavelet coefficients. In 5th Asian Conference on Computer Vision, Melbourne, Australia, Jan 23-25 2002.

12. Roach M., Mason J. Classification of video genre using audio // Eurospeech, 2001, Vol. 4, pp. 2693-2696.

13. Subashini K., Palanivel S., Ramalingam V. Audio-Video based Classification using SVM and AANN // International Journal of Computer Applications, Vol. 44, No. 6, April 2012, pp. 33-39.

14. Huang H.Y., Shih W.S., Hsu W.H. A Film Classifier Based on Low-level Visual Features // Journal of Multimedia, Vol. 3, No. 3, July 2008.

15. Kittler J., Messer K., Christmas W., Levienaise-Obadia B., Koubaroulis D. Generation of Semantic Cues for Sports Video Annotation // ICIP, 2001, pp. 26-29.

16. Choros K., Pawlaczyk P. Content-Based Scene Detection and Analysis Method for Automatic Classification of TV Sports News // Rough Sets and Current Trends in Computing, Lecture Notes in Computer Science, Vol. 6086, 2010, pp. 120-129.

17. Ionescu B. E., Rasche C., Vertan C. , Lambert P., A Contour-Color-Action Approach to Automatic Classification of Several Common Video Genres. Adaptive Multimedia Retrieval. Context, Exploration, and Fusion. Lecture Notes in Computer Science Volume 6817, 2012, pp 74-88

18. Pass G., Zabih R., Miller J. Comparing images using color coherence vectors // Proceedings of the fourth ACM International Conference on Multimedia, 1996.

19. Haralick R.M., Shanmugam K., Dinstein I. Textural features for image classification // IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-3, No. 6, Nov. 1973, pp. 610-621.

20. Park J., Han S., An Y., Heuristic Features for Color Correlogram for Image Retrieval, proc. of the ICCSA'08. International Conference on Computational Sciences and Its Applications, pp. 9-13, 2008.

21. Mel B.W. SEEMORE: combining color, shape, and texture histogramming in a neurally inspired approach to visual object recognition // Neural Computation, 1997, 9(4), pp. 777-804.

22. Machajdik J., Hanbury A. Affective Image Classification using Features Inspired by Psychology and Art Theory // Proceedings of the International Conference on Multimedia, ACM, New York, NY, USA, 2010, pp. 83-92.

23. Vaswani N., Chellappa R. Principal Components Null Space Analysis for Image and Video Classification // IEEE Transactions on Image Processing, Vol. 15, No. 7, July 2006, pp. 1816-1830.

24. Koskela M., Sjoberg M., Laaksonen J. Improving Automatic Video Retrieval with Semantic Concept Detection // Lecture Notes in Computer Science, Vol. 5575, 2009, pp. 480-489.

25. Truong B.T., Venkatesh S., Dorai C. Automatic Genre Identification for Content-Based Video Categorization // 15th International Conference on Pattern Recognition (ICPR'00), Vol. 4, p. 4230.

26. Li-Jia Li, Hao Su, Eric P. Xing, Li Fei-Fei. Object Bank: A High-Level Image Representation for Scene Classification and Semantic Feature Sparsification // Proceedings of the Neural Information Processing Systems (NIPS), 2010.

27. Godbole S. Exploiting confusion matrices for automatic generation of topic hierarchies and scaling up multi-way classifiers. Indian Institute of Technology Bombay. Annual Progress Report. January 2002.

28. Gomez G., Sanchez M., Sucar L.E. On selecting an appropriate color space for skin detection // Springer-Verlag: Lecture Notes in Artificial Intelligence, Vol. 2313, 2002, pp. 70-79.

Methodology for objective video matting methods comparison
M.V. Erofeev
Y.A. Gitman
D.S. Vatolin
A.A. Fedorov
Lomonosov Moscow State University, Moscow, Russia

Keywords: video matting, trimap, objective quality estimation.

Formally, matting is the problem of decomposing an image into a foreground image, a background image and a foreground transparency map. Until now, the only common comparison methodology existed for image matting and was not applicable to video matting. Moreover, authors of video matting methods either do not perform any objective evaluation at all or compare their method to only one or two competitors.

In this paper we propose a technique for comparing video matting methods by spatial error and temporal coherence. The expected error of the matting result composited over a random background is used as the spatial error measure, and the variance of the per-frame spatial error is used as the temporal error measure. We show comparison results for 12 matting methods. Additionally, we show how matting method performance is affected by the width of the trimap's unknown area.
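The two measures can be sketched as follows. This is a minimal illustration of the idea, not the authors' evaluation code: the expected value of the composite error is approximated here by a mean over a finite set of backgrounds, and function names are invented for the sketch.

```python
import numpy as np

def composite(fg, alpha, bg):
    """Standard compositing equation: I = alpha*F + (1 - alpha)*B."""
    a = alpha[..., None]                      # broadcast alpha over channels
    return a * fg + (1.0 - a) * bg

def spatial_error(alpha_est, alpha_gt, fg, backgrounds):
    """Mean composite error over a set of random backgrounds -- a finite
    approximation of the expected error used as the spatial measure."""
    errs = [np.abs(composite(fg, alpha_est, bg) -
                   composite(fg, alpha_gt, bg)).mean()
            for bg in backgrounds]
    return float(np.mean(errs))

def temporal_error(per_frame_spatial_errors):
    """Variance of the per-frame spatial error, used as the temporal
    coherence measure."""
    return float(np.var(per_frame_spatial_errors))
```

Measuring the error on composites rather than on the alpha map directly weights mistakes by how visible they would be in an actual compositing application.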

To carry out the experiments described above, we prepared a set of test video sequences with ground-truth transparency maps. To obtain the ground truth for our test data set, we employed chroma keying and the following stop-motion capture procedure. An object with semitransparent edges is placed on a platform in front of an LCD monitor. The object rotates in small, discrete steps along a predefined 3D trajectory, controlled by two servos connected to a computer. After each step, the digital camera in front of the setup captures the motionless object against a set of background images. At the end of this process, the object is removed and the camera again captures all of the background images. We paid special attention to avoiding reflections of the background screen in the foreground object. Such reflections can lead to false transparency that is especially noticeable in nontransparent regions. To reduce the amount of reflection, we used checkerboard background images instead of solid colors, adjusting the mean color of the screen to be the same for each background. The new stop-motion procedure enabled us to obtain transparency maps whose quality substantially exceeds the results of the chroma keying and stop-motion technique used in [9].

The results of all experiments have been published online to enable interactive analysis and the addition of new methods.


1. Levin A., Lischinski D., Weiss Y. A closed-form solution to natural image matting // IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 2008. Vol. 30, no. 2. P. 228-242.

2. Poisson matting / Jian Sun, Jiaya Jia, Chi-Keung Tang, Heung-Yeung Shum // ACM Transactions on Graphics (ToG). 2004. Vol. 23, no. 3. P. 315-321.

3. A bayesian approach to digital matting / Yung-Yu Chuang, Brian Curless, David H. Salesin, Richard Szeliski // Computer Vision and Pattern Recognition (CVPR). Vol. 2. 2001. P. II-264-II-271.

4. Bai X., Wang J., Simons D. Towards temporally-coherent video matting // International Conference on Computer Vision (ICCV). 2011. P. 63-74.

5. Sindeev M., Konushin A., Rother C. Alpha-flow for video matting // Asian Conference on Computer Vision (ACCV). 2013. P. 438-452.

6. Temporally coherent and spatially accurate video matting / Ehsan Shahrian, Brian Price, Scott Cohen, Deepu Rajan // Computer Graphics Forum. 2014. Vol. 33, no. 2. P. 381-390.

7. Video matting via opacity propagation / Zhen Tang, Zhenjiang Miao, Yanli Wan, Dianyong Zhang // The Visual Computer. 2012. Vol. 28, no. 1. P. 47-61.

8. Choi I., Lee M., Tai Y.-W. Video matting using multi-frame nonlocal matting Laplacian // European Conference on Computer Vision (ECCV). 2012. P. 540-553.

9. A perceptually motivated online benchmark for image matting / Christoph Rhemann, Carsten Rother, Jue Wang et al. // Computer Vision and Pattern Recognition (CVPR). 2009. P. 1826-1833.

10. Lee S.-Y., Yoon J.-C., Lee I.-K. Temporally coherent video matting // Graphical Models. 2010. Vol. 72, no. 3. P. 25-33.

11. Spatio-temporally coherent interactive video object segmentation via efficient filtering / Nicole Brosch, Asmaa Hosni, Christoph Rhemann, Margrit Gelautz // Pattern Recognition. Vol. 7476. 2012. P. 418-427.

12. Video matting of complex scenes / Yung-Yu Chuang, Aseem Agarwala, Brian Curless et al. // ACM Transactions on Graphics (TOG). 2002. Vol. 21, no. 3. P. 243-248.

13. Apostoloff N., Fitzgibbon A. Bayesian video matting using learnt image priors // Computer Vision and Pattern Recognition (CVPR). Vol. 1. 2004. P. I-407-I-414.

14. Corrigan D., Robinson S., Kokaram A. Video matting using motion extended grabcut // European Conference on Visual Media Production (CVMP). 2008. P. 33(1).

15. Video snapcut: Robust video object cutout using localized classifiers / Xue Bai, Jue Wang, David Simons, Guillermo Sapiro // ACM Transactions on Graphics (TOG). 2009. Vol. 28, no. 3. P. 70:1-70:11.

16. Mamrosenko K. A., Giatsintov A. M. Rear-projection Method in Visualization Subsystem of Training Simulation System // Software & Systems. 2014. 4.

17. Hollywood camera work. Accessed: 2015-01-03.

18. Keylight. Accessed: 2015-01-03.


20. Wang J., Cohen M.F. Optimized color sampling for robust matting // Computer Vision and Pattern Recognition (CVPR). 2007. P. 1-8.

21. Gastal E.S., Oliveira M.M. Shared sampling for real-time alpha matting // Computer Graphics Forum. 2010. Vol. 29, no. 2. P. 575-584.

22. Zheng Y., Kambhamettu C. Learning based digital matting // International Conference on Computer Vision (ICCV). 2009. P. 889-896.

23. Chen Q., Li D., Tang C.-K. KNN matting // IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 2013. Vol. 35, no. 9. P. 2175-2188.

24. Lee P., Wu Y. Nonlocal matting // Computer Vision and Pattern Recognition (CVPR). 2011. P. 2193-2200.

25. Improving image matting using comprehensive sampling sets / E. Shahrian, D. Rajan, B. Price, S. Cohen // Computer Vision and Pattern Recognition (CVPR). 2013. P. 636-643.

26. Johnson J., Rajan D., Cholakkal H. Sparse codes as alpha matte // British Machine Vision Conference (BMVC). Vol. 32. 2014. P. 245-253.

27. Levin A., Rav-Acha A., Lischinski D. Spectral matting // IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 2008. Vol. 30, no. 10. P. 1699-1712.

28. Shahrian E., Rajan D. Weighted color and texture sample selection for image matting // Computer Vision and Pattern Recognition (CVPR). 2012. P. 718-725.

Iterative Algorithm for the Eye Centers Localization on Facial Image
Khryashchev V.
Priorov A.
Nikitin A.
Stepanova O.
P.G. Demidov Yaroslavl State University (YSU), Russia, Yaroslavl

Keywords: face recognition, eye center localization, multi-block local binary pattern, machine learning.

Algorithms of digital image processing and computer vision play an important role in modern CCTV systems, making it possible to monitor hundreds and thousands of video streams in real time. One of the most important ways of modernizing such systems is solving the problem of automatic object recognition. This is a necessary condition for the development and manufacture of systems that can intelligently evaluate the environment and perform the necessary actions.

The problem of accurately determining the position of the eyes in a face image (eye localization) is important for a wide range of modern computer vision tasks, such as determining the direction of gaze and the angle of head rotation relative to the camera, analyzing facial expressions, and so on. In addition, eye localization is successfully used as a preliminary stage in the face recognition task: the coordinates of the eye centers help to properly normalize a face image after its detection. Studies show that the accuracy of eye localization has a significant impact on the quality of a face recognition system.

Over the past three decades, many different approaches to solving the eye localization problem have been proposed. However, despite significant progress in this area, the problem is still far from being solved.

Most modern eye localization methods can be divided into three categories:
- methods based on the measurement of the complex eye parameters;
- methods of creating a statistical eye model based on the training;
- methods that use information about the spatial structure of the face.

Analysis of known eye localization algorithms shows that the existing techniques are error prone. Insufficient image quality or the presence of spectacles in the image leads to inaccurate localization.

In this paper we propose an iterative algorithm for eye center localization based on multi-block local binary patterns that adapts to the quality and complexity of the face image. For comparison testing we used two well-known eye localization algorithms, gradient-based and Bayesian, which are often used in practical applications and demonstrate acceptable localization accuracy. The gradient-based localization algorithm uses a priori information about the spatial structure of the face; the Bayesian algorithm is based on statistical learning from an available sample of eye images.
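The descriptor family underlying the proposed algorithm can be illustrated as follows. This is a generic sketch of a multi-block LBP code, not the authors' implementation; the 3x3 grid and the comparison convention are the standard MB-LBP formulation, while the block sizes are free parameters.

```python
import numpy as np

def mb_lbp(patch, bx, by):
    """Multi-block LBP code of a patch split into a 3x3 grid of
    (by x bx)-pixel blocks: each of the 8 outer block means is compared
    with the central block mean, producing an 8-bit code. Unlike plain
    LBP, comparing block averages makes the code robust to pixel noise."""
    assert patch.shape == (3 * by, 3 * bx)
    means = patch.reshape(3, by, 3, bx).mean(axis=(1, 3))  # 3x3 block means
    center = means[1, 1]
    # clockwise order of the 8 neighbour blocks, starting at top-left
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        if means[i, j] >= center:
            code |= 1 << bit
    return code
```

Varying `bx` and `by` yields features at multiple scales from the same image, which is what lets the iterative algorithm adapt to image quality and complexity.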

The proposed algorithm produces almost no gross localization errors (err > 0.15): the eye localization error exceeds 0.15 for only 1% of the images from the FERET database and 4% from the BioID database. Furthermore, the method outperforms the other algorithms by almost an order of magnitude in speed, which makes it possible to localize eyes in a video stream in real time.


1. Dvorkovich V.P., Dvorkovich A.P. Digital video information systems (theory and practice) // Moscow: Technosphera, 2012. 1009 p.

2. Forsyth D.A., Ponce J. Computer vision. A modern approach // M.: Wilyams, 2004.

3. Szeliski R. Computer Vision: Algorithms and Applications // Springer, 2010.

4. Alpatov B.A., Muravyev V.S., Strotov V.V., Feldman A.B. Research of efficiency of use of image analysis algorithms in the navigation of unmanned aerial vehicles // Digital Signal Processing. 2012. No. 3. P. 29-34.

5. Alpatov B.A., Babayan A.V., Smirnov S.A., Maslennikov E.A. Preliminary estimation of spatial orientation of the object via outer contour descriptor // Digital Signal Processing. 2014. No. 3. P. 43-46.

6. Nikitin A.E., Khryashchev V.V., Priorov A.L., Matveev D.V. Development and analysis of face recognition algorithm based on quantized local patterns // Non-linear world. 2014. No. 8. P. 35-42.

7. Kriegman D., Yang M.H., Ahuja N. Detecting faces in images: A survey // IEEE Trans. on Pattern Analysis and Machine Intelligence. 2002. V. 24. No. 1. P. 34-58.

8. Hjelmas E. Face detection: A Survey // Computer Vision and Image Understanding. 2001. V. 83. No. 3. P. 236-274.

9. Zhao W., Chellappa R., Phillips P., Rosenfeld A. Face recognition: A literature survey // ACM Computing Surveys (CSUR). 2003. V. 35, No. 4. P. 399-458.

10. Marques J., Orlans N.M., Piszcz A.T. Effects of eye position on eigenface- based face recognition scoring // Technical Paper of the MITRE Corporation. October 2000. 7 p.

11. Nikitin A.E., Stepanova O.A., Studenova A.A., Khryashchev V.V. Localization of the eye center in the images // 17th Int. Conf. "Digital signal processing and its application" (DSPA-2015). Moscow, 2015. Vol. 2. P. 719-723.

12. Riopka T., Boult T. The eyes have it // Proc. of the ACM SIGMM Multimedia Biometrics Methods and Applications Workshop. 2003. P. 9-16.

13. Zhu Z., Fujimura K., Ji Q. Real-time eye detection and tracking under various light conditions // Proc. of the Symposium on Eye Tracking Research and Applications. 2002. V. 25. P. 139-144.

14. Zhu Z., Ji Q. Robust real-time eye detection and tracking under variable lighting conditions and various face orientations // Computer Vision and Image Understanding. 2005. V. 98(1). P. 124-154.

15. Wang P., Green M., Ji Q., Wayman J. Automatic eye detection and its validation // Proc. of the IEEE Conference on Computer Vision and Pattern Recognition. 2005. V. 3. P. 164-172.

16. Li G. An Efficient Face Normalization Algorithm Based on Eyes Detection // Proc. of the IEEE International Conference on Intelligent Robots and Systems. 2006. P. 3843-3848.

17. Song F., Tan X., Chen S., Zhou Z.H. A literature survey on robust and efficient eye localization in real-life scenarios // Pattern Recognition. 2013. V. 46(12). P. 3157-3173.

18. Efimov I.N. Local binary patterns of the median pixel as effective informative features for pattern recognition // Digital Signal Processing. 2015. No. 1. P. 61-65.

19. Zhang L., Chu R., Xiang S., Liao S., Li S.Z. Face Detection Based on Multi-Block LBP Representation // Advances in Biometrics, Lecture Notes in Computer Science. 2007. P. 11-18.

20. Timm F., Barth E. Accurate Eye Centre Localisation by Means of Gradients // Proc. of the International Conference on Computer Vision Theory and Applications (VISAPP), 2011. V. 1. P. 125-130.

21. Everingham M.R., Zisserman A. Regression and classification approaches to eye localization in face images // IEEE International Conference on Automatic Face & Gesture Recognition. 2006. P. 441-446.

22. BioID face database // URL:

23. Phillips P.J., Moon H., Rauss P.J., Rizvi S. The FERET evaluation methodology for face recognition algorithms // IEEE Transactions on Pattern Analysis and Machine Intelligence. 2000. V. 22(10). P. 1090-1104.

Method of digital images compression without spectral conversions
E.P. Petrov
N.L. Kharina
P.N. Sukhikh
Vyatka State University, Russia, Kirov

Keywords: digital image, image compression, prediction, two-dimensional Markov process, bit image

The article describes a method of digital image (DI) compression. A distinctive feature of this method is the absence of spectral transformations and arithmetic computing procedures. The method supports a parallel processing mode, since each color component of the DI is divided into binary images (BI) and the compression procedure is applied to each BI independently. This makes it possible to compress DI of any bit depth (from 8 bits per pixel and above for panchromatic DI) and to considerably reduce DI processing time at minimal energy cost, which is particularly important when compressing high-resolution images under strict power constraints, for example in Earth remote sensing systems.

The basis of the method is a prediction procedure. Each BI is approximated by a two-dimensional Markov chain with two states. This representation makes it possible to exploit the statistical redundancy of the image as much as possible. The prediction of BI elements is realized on the basis of the transition probability matrix of the two-dimensional Markov chain, and only incorrectly predicted bits are stored. Only logical comparison operations are used to implement the prediction. The prediction procedure is most effective for high-order BI, which contain maximum redundancy. Middle-order and low-order BI typically contain areas that are statistically similar to white Gaussian noise (WGN); such areas are removed from these BI before the prediction procedure and filled with WGN samples at restoration. Any coding can be applied after the prediction procedure; the RLE and Huffman methods are used in this work.
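The bit-plane decomposition and the store-only-the-errors idea can be sketched as follows. The predictor below is a deliberately simplified stand-in (predict each bit from its left neighbour): the actual method derives the prediction from the transition probability matrix of a two-dimensional Markov chain, which this toy rule does not attempt to model.

```python
import numpy as np

def bit_planes(img8):
    """Split an 8-bit grayscale image into 8 binary images (bit planes),
    from the least to the most significant bit."""
    return [(img8 >> k) & 1 for k in range(8)]

def predict_plane(plane):
    """Toy stand-in for the two-state Markov prediction: predict every
    bit as equal to its left neighbour (first column predicted as 0) and
    return the map of incorrectly predicted bits -- the only data that
    needs to be stored. Only logical operations are used, as in the paper."""
    pred = np.zeros_like(plane)
    pred[:, 1:] = plane[:, :-1]
    return plane ^ pred          # 1 where the prediction failed

def reconstruct_plane(errors):
    """Invert predict_plane: scan columns left to right, re-applying the
    predictor and flipping wherever an error bit is set."""
    plane = np.zeros_like(errors)
    plane[:, 0] = errors[:, 0]
    for c in range(1, errors.shape[1]):
        plane[:, c] = plane[:, c - 1] ^ errors[:, c]
    return plane
```

For a high-order plane with large uniform regions the error map is almost entirely zeros, which is exactly what makes the subsequent RLE/Huffman coding effective.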

The effectiveness of the method was investigated by compressing test panchromatic and color images with the known methods (JPEG, JPEG2000) and the proposed one. MSE, SSIM and processing speed were used as parameters for estimating the quality of the recovered images. The proposed method shows a slight loss in compression ratio but higher performance in comparison with its analogs.


1. Gonzalez R., Woods R. Digital image processing // Prentice Hall, 2002.

2. Jähne B. Digital Image Processing: Concepts, Algorithms, and Scientific Applications // N.-Y.: Springer, 2005.

3. Dvorkovich V.P., Dvorkovich A.P. Digital video information systems (theory and practice) // Moscow: Technosfera, 2012.

4. Dvorkovich V.P., Dvorkovich A.P. Metrological support of video information systems // Moscow: Technosfera, 2015.

5. Petrov E.P., Medvedeva E.V., Kharina N.L. Mathematical model of digital halftone images of the Earth from space // II All-Russian scientific and technical conference "Actual problems of the missile and space equipment", Samara, 2011, pp. 179-185.

6. Petrov E.P., Kharina N.L., Rzanikova E.D. Method of digital grayscale images compression on the basis of Markov chains with several states // International scientific and technical conference "Digital processing of signals and its application", Moscow, 2013, XV-1, pp. 132-135.

7. Petrov E.P., Kharina N.L., Rzanikova E.D. Method of digital grayscale images compression on the basis of Markov chains with several states // III All-Russian scientific and technical conference "Actual problems of the missile and space equipment", Samara, 2013, pp. 163-170.

Research of the correlation images combining algorithms in combined vision systems
S.I. Elesina
O.A. Lomteva
Ryazan State Radio Engineering University (RSREU), Russia, Ryazan

Keywords: global extremum, criterion function, search area, genetic algorithm, step-by-step scanning, images pyramid, combined vision systems, parameters of aircraft positioning, real image, virtual image.

Modern aviation imposes high requirements on navigation accuracy and flight safety under any piloting conditions. To solve these tasks, combined vision systems, representing a combination of artificial vision and enhanced vision systems, are used.

Combined vision systems perform correlation combining of two images, a real one (RI) and a virtual one (VI): a comparison functional, in particular the cross-correlation function of the VI and RI, is computed, and the extremum of the obtained comparison functional is determined.

There are many methods of global extremum search: the genetic algorithm (GA), step-by-step scanning and others.

The aim of this work is to investigate the global optimization methods used for correlation image combining and to find ways to increase the performance of the image combining algorithm.

We have found that the most effective global optimization method is the genetic algorithm, which has a definite advantage over the other methods. In particular, the image combination runtime with the GA is an order of magnitude less than with the step-by-step scanning method. A study to determine the optimal GA parameter values for use in combined vision systems has therefore been carried out.

The ideas of the GA are borrowed from nature. They are based on the genetic processes of biological organisms: biological populations evolve over generations, obeying the laws of natural selection according to the survival-of-the-fittest principle.
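A toy GA for the shift-search problem can be sketched as follows (an illustrative assumption, not the algorithm studied in the paper; population size, mutation rate, and operators are arbitrary): individuals are candidate (dy, dx) shifts, fitness is the correlation functional, and selection, crossover, and mutation produce each new generation.

```python
import random
import numpy as np

def fitness(ri, vi, dy, dx):
    """Comparison functional: normalized cross-correlation at shift (dy, dx)."""
    h, w = vi.shape
    p = ri[dy:dy + h, dx:dx + w] - ri[dy:dy + h, dx:dx + w].mean()
    v = vi - vi.mean()
    denom = np.sqrt((p ** 2).sum() * (v ** 2).sum())
    return float((p * v).sum() / denom) if denom else 0.0

def ga_combine(ri, vi, pop_size=30, generations=40, mut_prob=0.3, seed=1):
    """Minimal GA: truncation selection keeps the fitter half (elitism),
    crossover mixes parents' coordinates, mutation perturbs a child."""
    rng = random.Random(seed)
    max_dy = ri.shape[0] - vi.shape[0]
    max_dx = ri.shape[1] - vi.shape[1]
    pop = [(rng.randint(0, max_dy), rng.randint(0, max_dx))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(ri, vi, *s), reverse=True)
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a[0], b[1])                      # crossover: mix coordinates
            if rng.random() < mut_prob:               # mutation: small random step
                child = (min(max_dy, max(0, child[0] + rng.randint(-2, 2))),
                         min(max_dx, max(0, child[1] + rng.randint(-2, 2))))
            children.append(child)
        pop = survivors + children
    best = max(pop, key=lambda s: fitness(ri, vi, *s))
    return best, fitness(ri, vi, *best)
```

Unlike the exhaustive scan, the GA evaluates the functional only at the shifts its population visits, which is where its order-of-magnitude runtime advantage comes from; its parameters (population size, mutation probability, number of generations) are exactly the quantities the paper seeks to optimize.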

To increase the performance of the image combining algorithm, we propose to use extended virtual terrain model angles, which are expected to decrease the computational complexity of the correlation image combining algorithm. This approach considerably reduces the number of virtual images generated for combining.


1. Kostyashkin L.N., Loginov A.A., Nikiforov M.B. Problematic aspects of combined vision system of aircraft // Izvestiya SFedU. Engineering sciences. 2013, 5, pp. 61-65.

2. Loginov A.A., Muratov E.R., Nikiforov M.B., Novikov A.I. Reducing the computational complexity of image registration in the aircraft technical vision systems // Dynamics of Complex Systems. 2015. Part 9, no. 1, pp. 33-40.

3. Elesina S., Lomteva O. Increase of image combination performance in combined vision systems using genetic algorithm // Proceedings of the 3rd Mediterranean Conference on Embedded Computing. Budva, Montenegro. 2014, pp. 158-161.

4. Nikiforov, S. Elesina, A. Efimov. Criterial Functions Selection for Combined and Enhanced Synthetic Vision Systems of the Aircraft // Computer Science and Information Technologies: Materials of the VIIth International Scientific and Technical Conference CSIT 2013. Lviv: Publishing Lviv Polytechnic, 2013, pp. 56-58.

5. Elesina S.I., Kostyashkin L.N., Loginov A.A., Nikiforov M.B. Images combining in correlation-extremal navigation systems. Monograph // Edited by Kostyashkin L.N., Nikiforov M.B. Moscow: Radiotekhnika, 2015. 208 p.

6. Vizilter Y.V., Zheltov S.Y., Bondarenko A.V., Ososkov M.V., Morzhin A.V. Image processing and analysis in problems of computer vision: Course of lectures and practical exercises. Moscow: Fizmatkniga, 2010. 672 p.

7. Tzoy Y.R., Spitzin V.G. Genetic algorithm: tutorial. Tomsk: publishing office of TPU, 2006. 146 p.

8. Mitchell M. An Introduction to Genetic Algorithms, 5th ed. Cambridge, MA: MIT Press, 1999, pp. 103-131.

9. Elesina S.I., Nikiforov M.B. Increase genetic algorithm performance // Information technologies: theoretical and applied scientific and technological magazine. 2012, no. 3, pp. 49-54.

If you have any questions, please write: