Digital Signal Processing

Scientific & Technical

“Digital Signal Processing” No. 3-2014

In the issue:

- multicomponent images;
- spatial resolution enhancement;
- combining images;
- remote sensing of the Earth;
- object pose estimation;
- moving object detection;
- distortion compensation;
- neural processing.

Image Resolution Enhancement Method
Drynkin Vladimir Nikolaevich, head of department, e-mail:
Tsareva Tatiana Igorevna, senior researcher, candidate of science, e-mail:

Keywords: image resolution, solid matrix photodetectors, diagonal sub-pixel shift, hexagonal discretization, three-dimensional interpolation low-pass filter.

This article addresses problems of image resolution enhancement through a sequence of adjacent frames.

Methods of camera resolution enhancement through the use of multiple sensors with electronic image stitching and methods based on sub-pixel shifting in the adjacent image frames of the same object are analyzed. The basic advantages and disadvantages of these methods are identified.

As an alternative, the authors propose a method for improving vertical and horizontal image resolution that is free from the outlined deficiencies. The proposed method uses a diagonal sub-pixel shift of the image in two adjacent frames with the addition of intermediate zero rows and columns. The resolution increase is provided by three-dimensional spatio-temporal interpolation of these adjacent frames.

Considerable attention is paid to methods of frame formation with a diagonal sub-pixel shift and to the synthesis of the three-dimensional interpolation filter. Taking into account the anisotropy of the spectrum of real images, the passband of the three-dimensional spatio-temporal interpolation filter is defined in the form of an octahedron. The synthesized filter is a cascade connection of three-dimensional, two-dimensional, and one-dimensional recursive-nonrecursive blocks.
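The zero-insertion step can be illustrated with a minimal sketch. This is not the authors' three-dimensional recursive-nonrecursive cascade: for illustration it upsamples a single frame by 2x and fills the inserted zeros with a separable triangular kernel; the function names are hypothetical.

```python
import numpy as np

def zero_insert_2x(frame):
    """Insert intermediate zero rows and columns, doubling the
    frame size in both dimensions (the samples of two diagonally
    shifted frames interleave on this denser grid)."""
    h, w = frame.shape
    up = np.zeros((2 * h, 2 * w))
    up[::2, ::2] = frame
    return up

def interpolate_2d(up, kernel=np.array([0.5, 1.0, 0.5])):
    """Fill the inserted zeros with a separable low-pass
    interpolation filter (triangular kernel for illustration)."""
    rows = np.apply_along_axis(np.convolve, 1, up, kernel, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, kernel, mode="same")
```

In the method described above, the interpolation is three-dimensional (spatio-temporal) and operates on two diagonally shifted adjacent frames rather than on one frame.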

In conclusion, the results of hardware-in-the-loop simulation using real images are presented, demonstrating the possibility of image resolution enhancement using the proposed method. Analysis of the results showed that, on average, the resolution is increased by a factor of 1.7.

The proposed method is suggested as a basis for resolution enhancement in television, thermal imaging, and other types of video systems, including multispectral ones.


1. Guyot L., Ricodeau J., Rougeot H. Dispositif pour la production d'images televisees, a matrices a transfert de charges, et chaine de prise de vue comportant un tel dispositif. FR Patent No. 2476949 (A1), H04N3/1593, H04N3/28, H04N 5/32; H05G 1/60, 1/64. Prior. 22.02.1980. Publ. 28.08.1981.

2. Hoagland K.A. Charge-coupled device video-signal-generating system. US Patent No. 4038690, H04N 3/14, 358/213; 357/24; 357/30. Filed: Jan. 21, 1976. Publ. July 26, 1977.

3. Smelkov V.M. Method for improving the resolution of cameras for forensic diagnostics. Available at: (accessed 03.04.2014).

4. Park S.Ch., Park M.K., Kang M.G. Super-resolution image reconstruction: a technical overview. IEEE Signal Processing Magazine. May 2003. Vol. 20, Issue 3. Pp. 21-36.

5. Katsaggelos A., Molina R., Mateos J. Super Resolution of Images and Video. Synthesis Lectures on Image, Video and Multimedia Processing. Ed. A.C. Bovik. Morgan & Claypool Publishers. 2007. 134 pp.

6. Vilenchik L.S., Kurkov I.N., Razin A.I., Rozval Ya.B. Method of forming high-definition television images in the conventional CCD camera and device for implementing the method. RF Patent No. 2143789, H04N 5/335, 5/225. Prior. 23.01.1998. Publ. 27.12.1999.

7. Borodjansky A.A., Drynkin V.N. Vertical-temporal filtering in high definition television systems. Ryazan. 1986. 15 pp. Dep. in CNTI Informsvyaz. 24.03.87, No. 1068-sv.

8. Drynkin V.N., Falkov E.J., Tsareva T.I. Efficiency of aerospace onboard two-spectral band image generator system. Computer Vision, 2013. Edition 1(1). Pp. 60-66. Available at: (accessed 10.06.2014).

9. Drynkin V.N., Tsareva T.I. Video system image resolution method. Appl. No. 2014103333(005183). 03.02.2014. H04N 3/14, 5/335. Claimant – FGUP GosNIIAS.

10. Borodjansky A.A. Optimum discretization of moving images. Elektrosvyaz. 1983. No. 3. Pp. 35-39.

11. Borodjansky A.A. Hyper-triangular discretization of n-dimensional messages. Radiotekhnika. 1985. No. 4. Pp. 49-52.

12. Ben-Ezra M., Zomet A., Nayar S.K. Video Super-Resolution Using Controlled Subpixel Detector Shifts. IEEE Transactions on Pattern Analysis and Machine Intelligence. June 2005. Vol. 27. No. 6. Pp. 977-987.

13. Digital encoding of TV images. Ed. I.I. Tsukkerman. Radio i svyaz'. 1981. 240 pp.

14. Drynkin V.N., Falkov E.J., Tsareva T.I. Composite image generation in two-spectral onboard airspace system. Proc. of Scientific-Technical Conf. Moscow, 14-16 March, 2012. Ed. R.R. Nazirov. Pp. 33-39.

15. Borodjansky A.A., Drynkin V.N. Synthesis of multidimensional recursively-nonrecursive filters. Radiotekhnika. 1986. No. 4. Pp. 47-51.

16. Drynkin V.N. Real-Time design of N-dimensional digital filters for image processing // Digital Photogrammetry and Remote Sensing’95; editor E.A. Fedosov. St. Petersburg, 1995. Pp. 240-249.

17. Albats M.E. Spravochnik po raschetu filtrov i liniy zaderzhki [Handbook on the calculation of filters and delay lines]. Moscow-Leningrad. Gosenergoizdat. 1963. 200 pp.

18. Borodjansky A.A., Drynkin V.N. Stability of multidimensional recursively-nonrecursive filters. Radiotekhnika. 1988. No. 3. Pp. 37-38.

19. Bondarenko A.V., Dokuchaev I.V., Drynkin V.N., Tsareva T.I., Bondarenko M.A. 3D filter hardware realization. Proc. of Scientific-Technical Conf. Moscow, 12-14 March, 2013. Ed. R.R. Nazirov. (Publ.)

20. Bondarenko A.V., Bondarenko M.A., Drynkin V.N., Dokuchaev I.V., Yadchuk K.A. Spatio-temporal filtering of moving images. Proc. of Scientific-Technical Conf. Moscow, 18-20 March, 2014. Ed. R.R. Nazirov. (Publ.)

Preliminary Combination of Images and Methods to Evaluate Combination Quality
Anatoly I. Novikov, Associate Professor, Department of Higher Mathematics, Ryazan State Radio Engineering University, e-mail:
Aleksey I. Efimov, Master Student, Department of Electronic Computers, Ryazan State Radio Engineering University, e-mail:

Keywords: real and virtual images, objects contours, key points, correlation methods of combination, homography matrix, combination quality.


The need to combine images and use the resulting data arises in many areas of science and technology, including mapping, remote sensing of the Earth, multispectral technical vision systems (TVS) of aircraft (AC), and the vision of robotic systems. The present research is oriented toward application in multispectral TVS of AC.

One of the most important and complex problems solved by the on-board computer is combining the real images received from sensors of different nature with images synthesized on the basis of a digital terrain map.

Combination of real and synthesized images in on-board TVS is one of the most complicated tasks for several reasons. One of the main reasons is errors in determining the current AC coordinates as a material point in airspace (latitude λ, longitude φ, and altitude h), as well as errors in determining the AC orientation as an extended object in space: errors in measuring the yaw ψ, pitch θ, and roll γ angles.

Correlation combination algorithms [1] are the best known and most frequently applied in image combination, and they give sufficiently good combination results [2]. However, the application of these methods in aircraft on-board TVS is unrealistic because their high computational complexity excludes the possibility of solving the task in real time.

Sufficiently optimistic results are obtained using the method of image combination based on searching for a set of corresponding pairs of key points in the compared images and finding the transformation H that maps one image to the plane of the other. This method, based on the homography matrix, has rather low computational complexity. Its disadvantage is a strong dependence of the combination quality on how successfully the key point pairs in the combined images are chosen [4].

The authors of the article offer an approach allowing pre-combination of contours using only one pair of key points. The pre-combination algorithm consists of several steps:
- approximation of the main object contours in the real and synthesized images by polygons;
- determination of a one-to-one correspondence between the corner points of the polygons obtained in the previous step (search for key point pairs);
- choice of the main key point from the set of corresponding point pairs;
- application of a chain of shifts and rotations of the synthesized image until the images are fully combined, using the proposed method as well as an exhaustive search over angles in the six-dimensional parameter space.
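One elementary step of the shift-and-rotation chain about a single key point can be sketched as follows. This is only an illustration under assumptions: the contour is given as an N×2 point array, the search over the six-dimensional parameter space is omitted, and the function name is hypothetical.

```python
import numpy as np

def align_about_key_point(points, key_src, key_dst, angle_rad):
    """Rotate a synthesized-image contour by angle_rad about its key
    point, then shift it so that the key point lands on the
    corresponding key point of the real image."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s], [s, c]])
    # rotate around key_src, then translate key_src onto key_dst
    return (points - key_src) @ rot.T + key_dst
```

The full algorithm would apply such transformations iteratively until the contours of the real and synthesized images are combined.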

1. Baklitsky V.K., Bochkarev A.M. Methods of signal filtration in correlative-extreme navigation systems. M.: Radio and Communication, 1986. 1072 p. (in Russian)

2. Yelesina S.I., Yefimov A.I. Selection of Criterion Functions for Enhanced and Combined Vision Systems // Tula State University Proceedings, technical sciences, issue 9, part 1. 2013. Pp. 229-236. (in Russian)

3. Novikov A.I., Sablina V.A., Goryachev Ye.O. Application of contour analysis for image combination // Tula State University Proceedings, technical sciences, issue 9, part 1. Pp. 282-285. (in Russian)

4. Novikov A.I., Sablina V.A., Nikiforov M.B., Loginov A.A. Contour Analysis and Image Superimposition Task in Computer Vision Systems // 11th International Conference on Pattern Recognition and Image Analysis: New Information Technologies (PRIA-11-2013). Samara, 2013. Vol. 1. Pp. 282-285.

5. Chetverikov D., Szabo Zs. A Simple and Efficient Algorithm for Detection of High Curvature Points in Planar Curves // Proc. 23rd Workshop of the Austrian Pattern Recognition Group, 1999. Pp. 175-184.

6. Nepomnyaschy P.V., Yurin D.V. The search of reference points on vector images by means of corner structure detection using statistical hypothesis evaluation // Graphicon 2002 proceedings. (in Russian)

7. Novikov A.I. Limit detection algorithms of effective signals // RSREU Bulletin No. 2 (Issue 24). Ryazan, 2008. Pp. 11-15. (in Russian)

Occlusion Processing for Depth Map Propagation
Sergey Matyunin, e-mail:
Dmitry Vatolin, e-mail:

Keywords: depth map, video, image processing, optical flow, 3D video compression.

In the paper we consider the 2D-to-3D video conversion problem. We develop a system for semi-automatic conversion of 2D video into the 3D stereoscopic format. The traditional conversion pipeline consists mainly of manual depth map mark-up and automatic view generation. Instead of manual depth painting, we use limited manually painted input for only several frames of the video and automatically propagate it to the other frames.

Several methods have been proposed for this task in the literature [6, 8, 10]. Due to the nature of video sequences, good quality can only be achieved by taking objects' motion into account. The depth map is assigned to the corresponding objects and moves according to their displacements. Depth map alignment near object edges significantly influences the appearance of annoying artifacts in the converted stereo. At the same time, many video processing algorithms tend to fail in occlusion areas, which are present in one frame and absent in another. Motion estimation algorithms rely on color consistency between video frames, so they usually assign irrelevant vectors there.

We propose a technique for occlusion processing that relies on per-frame occlusion detection. Detected areas are accumulated and tracked according to objects' motion, forming a confidence map for interpolated pixels. In our approach, motion-aware interpolation is launched from several key frames of the processed video sequence to build several depth map versions. Then the most reliable version according to the confidence map can be chosen for each pixel.

We evaluated several per-frame occlusion area detection methods and several strategies of reliability map usage. Three occlusion detectors were tested: approaches based on motion compensation error, motion vector consistency (LRC), and a geometric approach [17]. The first criterion is based on the fact that in occlusions the motion-compensated difference between two adjacent frames is high. The second criterion checks the direction of motion vectors between two frames: in non-occluded areas they usually have the same magnitude and opposite directions, and in occlusions this condition is violated. The geometric approach relies on the density of motion vector endpoints, which is non-uniform in occluded regions. These detectors were evaluated according to the quality of the interpolated depth maps. The LRC and geometric approaches demonstrate the best results: the former achieves a slightly better PSNR level in our test, while the latter produces fewer artifacts near object borders.
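The LRC criterion described above can be sketched as a simplified per-pixel check. Nearest-neighbor sampling of the backward flow, the function name, and the threshold value are all illustrative assumptions.

```python
import numpy as np

def lrc_occlusion_mask(flow_fwd, flow_bwd, threshold=1.0):
    """Left-right consistency check: a pixel is marked occluded when
    its forward vector and the backward vector sampled at the
    forward-warped position do not cancel out."""
    h, w = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # positions each pixel maps to under the forward flow (rounded, clipped)
    xt = np.clip(np.round(xs + flow_fwd[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + flow_fwd[..., 1]).astype(int), 0, h - 1)
    residual = flow_fwd + flow_bwd[yt, xt]  # ~0 where flows are consistent
    return np.linalg.norm(residual, axis=-1) > threshold
```

Where the residual is large, the motion vectors between the two frames are inconsistent, which is taken as evidence of occlusion.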

We also proposed and compared several ways of using the confidence metric. The simplest approach is averaging several interpolated versions without regard to the confidence metric; it gives poor performance. Weighted averaging performs better: we used weights depending on the distance from the current frame to the corresponding key frame. It works better than the naïve strategy but fails in occlusions. The winner-takes-all strategy, which selects in each pixel the interpolated version with the best confidence, performs better still. The best results were achieved with a combined strategy: in each pixel, only the interpolated depth versions with near-the-best confidence values are considered, and they are averaged with weights inversely proportional to the distance to the key frame. This combined approach demonstrates more than 1 dB gain over the winner-takes-all approach.
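The combined strategy can be sketched as follows, under the assumption of per-version confidence maps and key-frame distances; the function name, the margin parameter, and the exact weighting formula are hypothetical.

```python
import numpy as np

def fuse_depth_versions(depths, confidences, distances, margin=0.05):
    """Per pixel, keep only depth versions whose confidence is within
    `margin` of the best one, then average them with weights inversely
    proportional to the distance to their key frame."""
    depths = np.asarray(depths, float)       # (K, H, W) interpolated versions
    conf = np.asarray(confidences, float)    # (K, H, W) confidence maps
    dist = np.asarray(distances, float)      # (K,) frames to each key frame
    best = conf.max(axis=0)                  # (H, W) best confidence per pixel
    keep = conf >= best - margin             # near-the-best versions only
    w = keep / (1.0 + dist[:, None, None])   # inverse-distance weights
    return (w * depths).sum(axis=0) / w.sum(axis=0)
```

With margin set to zero this degenerates to winner-takes-all; with a large margin it degenerates to plain distance-weighted averaging.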

The occlusion processing approach allowed increasing the PSNR of the interpolated depth map by more than 16 dB. The best quantitative results were demonstrated by the LRC method. The geometric occlusion detection approach had slightly worse results in terms of PSNR; however, it added fewer artifacts, which have a negative influence on the quality of stereo video.

The proposed method was applied to the depth map compression algorithm from [14]. The correlation between the 2D video and the depth map helps to reconstruct a highly compressed depth map during decoding. The depth map is compressed with reduced spatial and temporal resolution, which leads to very high compression rates. At the decoding stage, the algorithm restores the original resolution using information from the 2D video. Occlusion processing improved the quality (PSNR) of the decoded depth map by up to 2 dB. RD-curves for two sequences from [16] are presented in Figure 8.

The proposed occlusion processing approach can be applied to other video processing algorithms, such as video segmentation, restoration, etc.


1. Blais F. Review of 20 years of range sensor development. // Journal of Electronic Imaging. — 2004. — Vol. 13, no. 1. — Pp. 231–243.

2. Ogale A. S., Aloimonos Y. Shape and the Stereo Correspondence Problem // International Journal of Computer Vision. — 2005. — Vol. 65, no. 3. — Pp. 147–162.

3. Zhuo S., Sim T. On the Recovery of Depth from a Single Defocused Image // Computer Analysis of Images and Patterns / Ed. by Xiaoyi Jiang, Nicolai Petkov. — Springer Berlin Heidelberg, 2009. — Vol. 5702 of Lecture Notes in Computer Science. — Pp. 889–897.

4. Battiato S., Curti S., La Cascia M. et al. Depth map generation by image classification. — 2004.

5. Saxena A., Ng A., Chung S. Learning Depth from Single Monocular Images // IEEE Neural Information Processing Systems. — 2005. — Vol. 18.

6. Li Z., Cao X., Dai Q. A novel method for 2D-to-3D video conversion using bi-directional motion estimation // 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). — 2012. — Pp. 1429–1432.

7. Rapid 2D-to-3D conversion / P. V. Harman, J. Flack, S. Fox, M. Dowley / Ed. by A. J. Woods, J. O. Merritt, S. A. Benton, M. T. Bolas. — Vol. 4660. — SPIE, 2002. — Pp. 78–86. PSI/4660/78/1.

8. Varekamp C., Barenbrug B. Improved depth propagation for 2D to 3D video conversion using key-frames // IET Conference Publications. — 2007. — Vol. 2007, no. CP534. — Pp. 29–29.

9. Practical temporal consistency for image-based graphics applications / M. Lang, O. Wang, T. Aydin et al. — Vol. 31. — New York, NY, USA: ACM, 2012. — Pp. 34:1–34:8.

10. Guttmann M., Wolf L., Cohen-Or D. Semi-automatic stereo extraction from video footage. // ICCV’09. — 2009. — Pp. 136–142.

11. Video stereolization: Combining motion analysis with user interaction / M. Liao, J. Gao, R. Yang, M. Gong // Visualization and Computer Graphics, IEEE Transactions on. — 2012. — Vol. 18, no. 7. — Pp. 1079–1088.

12. Choi J., Min D., Sohn K. 2D-plus-depth based resolution and frame-rate up-conversion technique for depth video // Consumer Electronics, IEEE Transactions on. — 2010. — November. — Vol. 56, no. 4. — Pp. 2489–2497.

13. De Silva D.V.S.X., Fernando W. A C, Yasakethu S. L P. Object based coding of the depth maps for 3D video coding // Consumer Electronics, IEEE Transactions on. — 2009. — August. — Vol. 55, no. 3. — Pp. 1699–1706.

14. Matyunin S., Vatolin D. 3D Video Compression Using Depth Map Propagation // Multimedia Communications, Services and Security / Ed. by A. Dziech, A. Czyzewski. — Springer Berlin Heidelberg, 2013. — Vol. 368 of Communications in Computer and Information Science. — Pp. 153–166.

15. Grishin S. V. Software system for frame rate conversion of digital video signals. PhD thesis. Lomonosov Moscow State University, Moscow 2009. (In Russian: Grishin S. V. Programmnaia sistema dlia preobrazovaniia chastoty` kadrov tcifrovy`kh video signalov: Dis… kand. fiz.-mat. nauk: 05.13.11 / MGU. - M., 2009.)

16. Consistent depth maps recovery from a video sequence / Guofeng Zhang, Jiaya Jia, Tien-Tsin Wong, Hujun Bao // Pattern Analysis and Machine Intelligence, IEEE Transactions on. — 2009. — Vol. 31, no. 6. — Pp. 974–988.

17. Ince S., Konrad J. Geometry-based estimation of occlusions from video frame pairs // Acoustics, Speech, and Signal Processing (ICASSP’05). IEEE International Conference on / IEEE. — Vol. 2. — 2005. — Pp. ii–933.

18. Telea A. An image inpainting technique based on the fast marching method // Journal of graphics tools. — 2004. — Vol. 9, no. 1. — Pp. 23–34.

19. Ayvaci A., Raptis M., Soatto S. Sparse Occlusion Detection with Optical Flow // International Journal of Computer Vision. — 2012. — Vol. 97, no. 3. — Pp. 322–338.

Objects' Hyperspectral Feature Identification Algorithms in the Earth Remote Sensing Tasks
L.A. Demidova, e-mail:
R.V.Tishkin, e-mail:
S.V.Trukhanov, e-mail:
Ryazan State Radio Engineering University; Branch of SC «SRC «Progress» – Special Design Bureau «Spectr», Ryazan, Russia

Keywords: identification algorithm, objects’ hyperspectral feature, Euclidean distance similarity measure, fuzzy similarity measure, angular similarity measure, fuzzy linear regression, consolidation.

The paper is devoted to the creation and research of identification algorithms for the hyperspectral features (HSF) of Earth surface objects using various reasonably chosen similarity measures, and to the subsequent consolidation of the partial identification results obtained by means of the partial identification algorithms.

The following algorithms are offered:
- HSF identification algorithms based on classical (crisp) similarity measures, providing HSF identification by means of the Euclidean distance similarity measure and the angular similarity measure;
- HSF identification algorithms based on fuzzy similarity measures, providing HSF identification by means of fuzzy similarity measures with application of fuzzy linear regression;
- HSF identification algorithms based on the consolidation algorithm for partial HSF identification results, which combines the results of the partial identification algorithms based on the classical and fuzzy similarity measures and allows an increase in HSF identification accuracy.
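The two crisp similarity measures named above can be written down in a minimal sketch, treating each HSF as a vector of spectral samples; the function names are hypothetical.

```python
import numpy as np

def euclidean_similarity(s1, s2):
    """Euclidean distance between two spectral signatures
    (smaller means more similar)."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    return float(np.linalg.norm(s1 - s2))

def spectral_angle(s1, s2):
    """Angular similarity measure: the angle (in radians) between two
    spectral signatures viewed as vectors."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

The angular measure is insensitive to an overall brightness scaling of the signature, which is why it is often preferred for comparing object spectra with library references.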

During the analysis, the objects’ HSF obtained from space images are compared with reference HSF from spectral libraries.

The experimental results confirm the expediency of further development of the offered approach to the objects’ HSF identification problem, based on the consolidation of the partial identification results obtained with the partial identification algorithms. Such an approach allows increasing the reliability of the classification decision.

Some examples of objects’ HSF identification with application of the offered algorithms are given in the paper.

Use of this approach will enable identification of Earth surface objects by analyzing their HSF extracted from processed images of the spacecraft «Resource-P», with subsequent accumulation of reference characteristics in a database. This, in turn, will provide a current domestic spectral library of references, applicable to condition monitoring of agricultural lands, forests, water resources, and the ecological condition of soils.

1. USGS Spectroscopy Lab.

2. Jet Propulsion Laboratory. ASTER Spectral Library. NASA.

3. Spectral libraries – data sources on spectra. GIS-LAB.

4. Program complex ENVI. Manual. Company «Sovzond». 2009.

5. Shovengerdt R. A. Remote sensing. Models and methods of images' processing. – M.: Technosphere. 2010. 560 p.

6. Chandra A.M., Kosh S.K. Remote sensing and geographical information systems. M.: Technosphere. 2008. 312 p.

7. Ris U.G. Bases of remote sensing. M.: Technosphere. 2006. 336 p.

8. Pylkin A.N., Tishkin R. V., Trukhanov S. V. Tasks of DATA MINING and their decision in modern relational DBMS // Vestnik of Ryazan state radio engineering university. 2011. No. 4 (release 38). pp. 60-65.

9. Chubukova I.A. Data Mining. Bases of information technologies. Special courses. Binom publishing house. Laboratory of knowledge. 2006. 384 p.

10. Yang C., Everitt J.H., Bradford J.M. Yield estimation from hyperspectral imagery using spectral angle mapper (SAM). American Society of Agricultural and Biological Engineers. Vol. 51(2). Pp. 729-737.

11. Van der Weken D., Nachtegael M., Kerre E.E. An overview of similarity measures for images // Proceedings of ICASSP 2002 (IEEE Int. Conf. Acoustics, Speech and Signal Processing). Orlando, USA. 2002. pp. 3317-3320.

12. Pylkin A.N., Tishkin R.V. Methods and algorithms of image segmentation. M.: Hot Line – Telecom. 2010. 92 p.

13. Myatov G.N., Tishkin R.V., Ushenkin V.A., Yudakov A.A. Application of similarity fuzzy measures in the problem of images’ combination of Earth surface // Vestnik of Ryazan state radio engineering university. 2013. No. 2 (release 44). pp. 18-26.

14. Trukhanov S.V., Yudakov A.A. Database structure creation of intellectual data processing system of hyperspectral shooting. The certificate on the computer program state registration in Federal Service for Intellectual Property No. 2013611036 of 09.01.2013.

15. Trukhanov S.V., Yudakov A.A. Program of intellectual data processing of hyperspectral shooting. The certificate on the computer program state registration in Federal Service for Intellectual Property No. 2013610619 of 09.01.2013.

16. Kremer N. Sh., Putko B. A., Trishin I.M., Friedman M. N. Higher mathematics for economists: The textbook for higher education institutions. M.: UNITY. 2002. 471 p.

17. Demidova L.A., Myatov G. N. Approach to uniqueness assessment of piecewise and linear objects with use of fuzzy linear regression // Control system and information technologies. 2013. T. 51. No. 1. pp. 85-89.

18. Lee H., Tanaka H. Fuzzy approximations with non-symmetric fuzzy parameters in fuzzy regression analysis // Journal of the Operations Research Society of Japan. Vol. 42, No. 1. March 1999.

19. Anufriyev E.I., Smirnov A.B., Smirnova E.N. MATLAB 7. SPb.: BHV-St. Petersburg, 2005. 1104 p.

20. Demidova L.A., Tishkin R. V., Yudakov A.A. Development of clustering algorithms’ ensemble on the base of similarity matrixes of clusters tags and algorithm of spectral factorization // Vestnik of Ryazan state radio engineering university. 2013. No. 4-1 (46). pp. 9-17

Increase of Spatial Resolution of the Earth Hyperspectral Imagery by Fusion with High Resolution Multispectral Images
Eremeev V.V., PhD, Ryazan State Radio Engineering University, e-mail:
Makarenkov A.A., postgraduate, Ryazan State Radio Engineering University
Moskvitin A.E., PhD, Ryazan State Radio Engineering University.

Keywords: hyperspectral images, integration, spectral unmixing, multispectral images, increase of spatial resolution.

Satellite hyperspectral imagery has lower spatial resolution than multispectral imagery, which limits its area of application. The projection of one pixel of a hyperspectral image (HSI) on the Earth's surface can often contain hundreds of pixels received by a multispectral sensor with higher spatial resolution [1, 2]. So a hyperspectral sensor registers an averaged spectral characteristic of a relatively large area of the Earth. The received spectral characteristic (SC) describes the average properties of the SC of all objects in this area (so-called endmembers), i.e. a "mixed" SC is formed. At the same time, during thematic processing it is necessary to know the spectra of separate smaller objects, not the mixture of their spectral characteristics.

Some papers [1-5] consider the task of spectral unmixing, i.e. estimating the spectra of the separate objects constituting a pixel of a hyperspectral image on the basis of statistical data processing. Paper [5] suggests an approach based on linear regression to find the components of a mixed SC given a known set of reference ("clean") spectral characteristics. A disadvantage of these approaches is the need for libraries of reference spectral characteristics, which requires high-precision cross-calibration of the video data against the library SC. Another disadvantage is the complexity of precise estimation of the endmember abundances. This estimation can be done with higher precision using data of synchronous multispectral imagery with high spatial resolution [6-9]; in this case, materials of the synchronous multispectral imagery serve as reference information during HSI spectral unmixing. The above-mentioned papers use the technology of pan-sharpening [1, 2, 6-9] and the spectral unmixing technology with spectral libraries as reference information [3-5], whereas this article proposes another approach to the task of hyperspectral image spectral unmixing and spatial resolution increase. It is based on HSI spectral unmixing using multispectral images of higher spatial resolution as reference information.

The conventional task of spectral unmixing of hyperspectral image pixels is to find the spectral characteristics (endmembers) included in the analyzed pixel and their percentages according to the area occupied in the pixel (abundances). The proposed algorithm of spectral unmixing utilizing multispectral imagery with high spatial resolution consists of five stages. At the first stage, a new multispectral image is formed by averaging the spectral channels of the hyperspectral image. At the second stage, pixels of this image are compared with the corresponding areas of the high resolution image; if all pixels of the area corresponding to the analyzed pixel are close to each other, the analyzed pixel with coordinates m, n is considered "unmixed", and the spectral characteristic of pixel m, n of the hyperspectral image is added to the list of reference characteristics S. The full hyperspectral image is processed in this way. At the third stage, duplicate spectral characteristics are removed from the list S. At the fourth stage, the spectral characteristics S are transformed to the spectral resolution of the multispectral image. At the fifth stage, the reduced spectral characteristics are correlated with the spectra of pixels of the high resolution image, and the most similar reference spectrum is found for each point. Thus a list of base spectral characteristics and the corresponding percentages of occupied pixel area is formed for each pixel of the hyperspectral image.
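The matching performed at the fifth stage can be sketched as follows: for one high-resolution pixel, pick the reduced reference spectrum with the smallest spectral angle. This is only an illustration of that step (the function name is hypothetical), using the spectral angle measure cited in the evaluation below.

```python
import numpy as np

def match_reference_spectrum(pixel_spectrum, references):
    """Return the index of the reference spectrum with the smallest
    spectral angle relative to the given pixel spectrum."""
    p = np.asarray(pixel_spectrum, float)
    best_i, best_angle = -1, np.inf
    for i, r in enumerate(np.asarray(references, float)):
        cos = np.dot(p, r) / (np.linalg.norm(p) * np.linalg.norm(r))
        angle = np.arccos(np.clip(cos, -1.0, 1.0))
        if angle < best_angle:
            best_i, best_angle = i, angle
    return best_i
```

Because the spectral angle ignores overall brightness, two spectra that differ only by a scale factor match exactly.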

The presence of multispectral data with high spatial resolution received synchronously with the HSI allows expanding the spectral unmixing task to searching for the spatial location of endmembers inside the pixel. In this case, at the fifth stage of the above algorithm, a close reference spectral characteristic received by the hyperspectrometer is found for each pixel of the high resolution image. Then it is reduced to the imaging conditions of the analyzed point of the high resolution image, so that integrating the received spectral characteristic over the ranges of the multispectral image gives a brightness value equal to that of the corresponding multispectral image pixel. As a result, a new hyperspectral image is formed with the spatial resolution of the high resolution image.

Evaluation of the proposed algorithms was carried out in the following way. An airborne hyperspectral image with high spatial resolution was used as reference information. A low resolution hyperspectral image was formed by averaging and decimation of the airborne HSI. A multispectral image was formed by combining spectral channels of the high resolution hyperspectral image. Spatial resolution increase was then carried out for the low resolution HSI (the spectral angle was used as the measure of similarity [10]). The quality of the spectral unmixing algorithm was estimated by comparing the spectra of the same points of the reference high resolution HSI and the resulting HSI formed via spectral unmixing with resolution increase. The comparison was performed using the spectral angle measure; points where the spectral angle exceeded 5 degrees were considered erroneous. As a result, it was determined on the basis of representative statistical material (with a 22-fold difference in spatial resolution and a multispectral image with 4 channels) that the percentage of erroneous points was 4%, the average error over all points was 5.3 degrees, and the mean square deviation was 1.7 degrees.

Thus, an approach to the spectral unmixing of hyperspectral images utilizing synchronously acquired high spatial resolution multispectral images as reference information has been proposed. An algorithm for increasing the spatial resolution of a hyperspectral image was developed on the basis of spectral unmixing of HSI pixels. The quality of the spectral unmixing and HSI spatial resolution increase algorithm was estimated on the basis of real and model data.

1. Eremeev V.V. “Contemporary fields of work on analysis and increase of quality of the Earth surface images”//Digital signal processing. No.1. 2012, PP. 38 – 44.

2. Akhmetov R.N., Stratilatov N.R., Yudakov A.A., Vezenov V.I., Eremeev V.V. “Model of formation and some algorithms of hyperspectral image processing”// Research of the Earth from space. No.5. PP. 17-28.

3. Lucas Parra, Clay Spence, Paul Sajda, Andreas Ziehe, Klaus-Robert Müller, “Unmixing Hyperspectral Data”, in Advances in Neural Information Processing 12 (Proc. NIPS*99). 2000. PP. 942-948.

4. J.J. Settle, “Linear mixing and the estimation of end-members for the spectral decomposition of remotely sensed scenes”, SPIE Remote Sensing for Geology, 2960. 1996. PP. 104-109.

5. Iordache, M.-D.; Bioucas-Dias, J.M.; Plaza, A., "Sparse Unmixing of Hyperspectral Data", Geoscience and Remote Sensing, IEEE Transactions on , vol.49, no.6. 2011. PP. 2014-2039.

6. Antonushkina S.V., Eremeev V.V., Makarenkov A.A., Moskvitin A.E. “Specifics of analysis and information processing from satellite hyperspectral Earth imaging systems”// Digital signal processing. No.4. 2010. PP.38-43.

7. Eremeev V.V., Makarenkov A.A., Moskvitin A.E. Yudakov A.A. “Objects readability improving on hyperspectral imagery of Earth surface”// Digital signal processing. No.3. 2012. PP. 35 – 40.

8. Eremeev V.V. “Contemporary issues on processing of data received from the Earth remote sensing systems”// Radio engineering. No. 3. 2012. PP. 54-64.

9. Eremeev V.V., Makarenkov A.A., Moskvitin A.E., Myatov G.N. “Increase of informational content of the Earth imagery material by fusion of hyperspectral information with data from other imagery systems”// Digital signal processing. No.1. 2012. PP. 38 – 44.

10. Yuhas, R.H., Goetz, A. F. H., and Boardman, J. W., "Discrimination among semiarid landscape endmembers using the spectral angle mapper (SAM) algorithm", In Summaries of the Third Annual JPL Airborne Geoscience Workshop, JPL Publication 92-14, vol. 1. 1992. PP.147-149.

Preliminary Pose Estimation Algorithm Based on Contour Descriptors
B.A. Alpatov, P.V. Babayan, E.A. Maslennikov, S.A. Smirnov
Ryazan State Radio Engineering University, e-mail:

Keywords: 3-D pose estimation, contour descriptor, geosphere, Euler angles.

Image-based 3-D pose estimation is a topical problem of computer vision. The need for pose estimation arises in such applications as control and navigation of mobile robots and UAVs, automated aerial refueling of aircraft, spacecraft docking, etc.

Our earlier papers describe an approach to this problem that consists of two stages: learning and estimation. The learning stage is devoted to the calculation of training image descriptors. The training images are gathered by capturing a 3-D model from viewpoints evenly distributed on a sphere. The estimation stage focuses on the matching process between an observed image descriptor and the training image descriptors. Three pose estimation algorithms based on this approach, differing in descriptor calculation, have been developed: FFT-based descriptors, texture descriptors, and descriptors based on structural analysis. It should be noted that these algorithms possess some disadvantages, such as noise sensitivity and computational complexity.

In this work we offer a 3-D pose estimation algorithm based on contour descriptor calculation. Contour description has the following advantages:
- robustness to object illumination changes;
- small descriptor size;
- low computational complexity;
- high resistance to image noise.
The result of the proposed algorithm is a set of pose candidates with the best values of the matching criterion.

Euler angles (α, β, γ) are used to describe 3-D poses in our task. We chose a convention in which rotations are made in the global frame around the X, Y, and Z axes, respectively. It should be noted that the angle γ corresponds to camera rotation around its optical axis. To build a database of training image descriptors, we generated a grid of uniformly distributed pairs of angles (α, β) calculated using a geosphere sampling algorithm. Since the criterion function used to match descriptors is invariant to the angle γ, this angle is omitted during sampling.
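The (α, β) grid requires viewpoints spread near-uniformly over a sphere. One well-known construction in the spirit of Saff and Kuijlaars [7] is the Fibonacci spiral; the sketch below is an illustration of that idea, not necessarily the paper's own geosphere sampling algorithm:

```python
import numpy as np

def geosphere_angles(n):
    """Return n near-uniform viewpoint angle pairs (alpha, beta).

    alpha is the azimuth in [0, 2*pi), beta the polar angle in [0, pi].
    Uniform sampling in z = cos(beta) gives uniform density on the
    sphere; the golden-ratio step spreads the azimuths evenly.
    """
    golden = (1 + 5 ** 0.5) / 2
    k = np.arange(n)
    z = 1 - (2 * k + 1) / n                          # uniform in z
    beta = np.arccos(z)                              # polar angle
    alpha = (2 * np.pi * k / golden) % (2 * np.pi)   # azimuth
    return np.column_stack([alpha, beta])
```

Each row can then serve as one training viewpoint for descriptor calculation.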

At the estimation stage the calculated descriptor of the observed image is matched with the training descriptors. The first m elements of the criterion function value vector, sorted in ascending order, correspond to geosphere points {(α1, β1), ..., (αm, βm)} and camera rotation angles {γ1, ..., γm}, which form the pose candidate set. As this set contains similar poses, we split it into n clusters, grouping poses by their nearness on the geosphere. The representative of a cluster is the element of the cluster that corresponds to the minimal value of the criterion function.
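The grouping step above can be sketched greedily: keep a candidate as a new cluster representative only if it lies far enough from every representative already chosen. This is our illustrative interpretation; the angular threshold and the greedy scheme are assumptions, not the paper's exact clustering procedure:

```python
import numpy as np

def cluster_pose_candidates(points, scores, max_angle_deg=15.0):
    """Group pose candidates by nearness on the geosphere.

    points: unit 3-D viewpoint vectors; scores: matching-criterion
    values (lower is better). Candidates are visited best-first; each
    cluster is represented by its best-scoring member.
    Returns the indices of the representatives, best first.
    """
    order = np.argsort(scores)
    cos_thr = np.cos(np.radians(max_angle_deg))
    reps = []
    for i in order:
        # a small dot product means a large angular distance
        if all(np.dot(points[i], points[j]) < cos_thr for j in reps):
            reps.append(i)
    return reps
```

Two candidates within the angular threshold thus collapse into one representative, the one with the smaller criterion value.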

The experimental research was carried out on 500 sample images; it was found that choosing m = 5 always yields a pose candidate with an error of less than 10°.

1. Alpatov B.A., Babayan P.V., Balashov O.E., Stepashkin A.I. Methods of automated object detection and tracking. Image processing and control // Radiotechnika, 2008. – 176 p.

2. Bakhshiev A.V., Korban P.A., Kirpan N.A. Software package for determining the spatial orientation of objects by TV picture in the problem space docking // Robotics and technical cybernetics, 2013, No. 1 (1) – pp. 71-75

3. Babayan P.V., Maslennikov E.A. Image-based algorithms for 3-D pose estimation // Proceedings of 15th international conf. DSPA-2013, 2013, Vol. 2. – pp. 58-62

4. Alpatov B.A., Babayan P.V., Maslennikov E.A. Image-based algorithms for 3-D pose estimation in onboard surveillance systems // Bulletin of Ryazan State Radio Engineering University, 2013, No. 3. – pp. 3-8.

5. Bekir E. Introduction to Modern Navigation Systems // World Scientific Publishing Co. Pte. Ltd. 255 p., 2007.

6. Slabaugh, G. G. (1999). Computing Euler angles from a rotation matrix. Retrieved on August, 6, 2000.

7. Saff E., Kuijlaars A. “Distributing many points on a sphere” // The Mathematical Intelligencer, 1997, Vol. 19, No. 1. – pp. 5-11.

8. Schaeffer S. E. Graph clustering // Computer Science Review. – 2007. – Vol. 1. – No. 1. – pp. 27-64.

A Method of Moving Object Detection in a Video Stream and Estimation of Object Coordinate Determination Accuracy
Medvedeva E.V., e-mail:
Karlushin K.A., e-mail:

Keywords: Moving objects detection, estimation accuracy of coordinates determination, images, multidimensional Markov chains.

A method for detecting moving objects in a video stream against a practically stationary background is proposed.

The method is based on representing a sequence of digital halftone images (DHI) as a three-dimensional Markov chain with several states.

The DHIs, represented by g-bit binary numbers, are decomposed into g binary bit-plane images (DBI). Each DBI in the video sequence is a three-dimensional Markov chain with two equiprobable states and with horizontal, vertical, and temporal transition probability matrices.

To find the contours of moving objects, the amount of information carried by each DBI element is calculated from the states of its neighboring elements, where w(·) is the transition probability density in Markov chains of different dimensions.

The calculated value of information is then compared with a threshold to decide whether the pixel belongs to a contour.

It is shown that the elements located in the high-order bits of the DHI possess the highest correlation. Therefore, the main detailed areas can be found in the DBIs of the highest bits of the DHI. The proposed method of defining moving object contours requires few computational resources, as only comparisons with three neighboring elements are carried out for each element.
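A rough sketch of the bit-plane decomposition and the three-neighbor comparison described above. This is a hypothetical illustration: the paper thresholds an information measure built from the transition probabilities w(·), and the simple mismatch count with `min_mismatches` here is only a stand-in for that threshold:

```python
import numpy as np

def bitplane(img, bit):
    """Extract one binary bit-plane image (DBI) from a grayscale frame."""
    return (img >> bit) & 1

def contour_mask(curr, prev, bit, min_mismatches=2):
    """Mark pixels whose bit-plane state disagrees with enough of the
    three neighbors used by the method: left, upper, and the same
    pixel in the previous frame."""
    b, bp = bitplane(curr, bit), bitplane(prev, bit)
    mism = np.zeros(b.shape, dtype=int)
    mism[:, 1:] += (b[:, 1:] != b[:, :-1])   # left neighbor differs
    mism[1:, :] += (b[1:, :] != b[:-1, :])   # upper neighbor differs
    mism += (b != bp)                        # temporal neighbor differs
    return mism >= min_mismatches
```

On a frame pair where a bright square appears, the mask fires on the square's boundary but stays quiet in its uniform interior, which matches the contour-extraction intent.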

To detect the object of interest, the image is divided into blocks, and simultaneously with the contour pixel definition these blocks are analyzed for the existence of connected pixels.

The coordinates of the moving object correspond to the center of the rectangle into which the object of interest fits.

Results of modeling the developed method are shown. For quality assessment of moving object detection, the probability of correct object detection and the root mean-squared error (RMSE) of object coordinate determination were calculated.

For most video sequences, the gain in RMSE of coordinate determination accuracy for the developed method is 1.5 to 2.5 times compared with the known subtraction method.

The developed method requires few computational resources, so it can be applied in real-time data processing. The range of object dimensions in the video sequence can be wide, and the number of moving objects can be a priori unknown.

1. Jahne B. (2005). Digital Image Processing: Concepts, Algorithms, and Scientific Applications. Springer-Verlag, Berlin Heidelberg.

2. Alpatov B.A., Babayan P.V., Balashov O.E., Stepashkin A.I. (2008). Methods of automatic detection and tracking of objects. Radiotechnica, Moscow. [in Russian]

3. Bogoslovskiy A. V., Bogoslovskiy E.A., Zhigulina I.V., Yakovlev V.A. (2013). Multidimensional signal processing. Linear multidimensional discrete signal processing. Methods of the analysis and synthesis. Radiotechnica, Moscow. [in Russian].

4. Trifonov A.P., Kucov R.V. (2011). Dynamic images processing. Detection and assessment of movement parameters. LAP LAMBERT Academic Publishing, Germany.

5. Petrov E.P., Medvedeva E.V., Metelyov A.P. (2011) Method of synthesis of video images mathematical models based on multidimensional Markov chains. Nonlinear World 4: pp. 213-231 [in Russian].

6. Karlushin K.A., Medvedeva E.V. (2013). A method of moving objects detection in video sequences based on three-dimensional Markov chains. T-Comm. Telecommunications and transport 9: pp. 94-97 [in Russian].

7. Karlushin K.A., Medvedeva E.V. (2014). A method of moving objects detection based on spatio-temporal image model. Works of the 69th international conference devoted to Day of Radio «Radio electronic Devices and Systems for Info communication Technologies», Moscow. pp.136-140 [in Russian].

Improving the Efficiency of Object Discrimination in Video Tracking Systems in the Presence of Clutter
Muraviev V.S., Feldman A.B.
Ryazan State Radio Engineering University, email:

Keywords: object, clutter, feature set, discrimination, tracking.

Video tracking techniques are widely used for traffic control, navigation, video surveillance systems, and military applications. Tracking is a complex process that generally includes such steps as image preprocessing, detection, discrimination, measurement, and prediction of the object position. Objects are often observed in the presence of clutter, background occlusion, and intersecting trajectories. Reliability of video tracking systems under such complex conditions is obviously highly desirable. In this article an attempt is made to increase the stability of aircraft tracking in infrared image sequences.

It is assumed that objects are localized in the image using a known aircraft detection method, for example the spatial background compensation method. One possible way to achieve better results is to obtain a more relevant description in order to discriminate objects from clutter with higher probability. In practice, object area, dimensions, aspect ratio, and average brightness can be considered as basic attributes. During the studies five additional object characteristics were suggested which can be combined with the specified attributes.

The first characteristic is the minimum value of the SAD (sum of absolute differences) function, reflecting the similarity between the template and the objects found in the current frame. The next feature is based on a modified HOG descriptor calculated for the object image. To extract the third feature, a binary segment is divided into sectors of equal area and the distribution of object points over the sectors is counted. The normalized histograms are compared by calculating the Bhattacharyya coefficient. The contour curvature and the representation of contour pixels in polar coordinates are used to describe the object shape.
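The Bhattacharyya coefficient used to compare the normalized sector histograms is a standard measure [8]; a minimal sketch (the function name is ours):

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient of two histograms: 1.0 for identical
    distributions, 0.0 for non-overlapping ones. Inputs are normalized
    to unit sum before comparison."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return float(np.sum(np.sqrt(h1 * h2)))
```

A coefficient close to 1 indicates that the observed segment's point distribution matches the template's, which supports the object hypothesis over the clutter hypothesis.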

An object discrimination algorithm based on the proposed feature set has been developed and modeled as a part of the video tracking system. The algorithm takes into account both the feature values and their variations over time: an object usually behaves more stably and its characteristics cannot change rapidly, which is not true for clutter.

To estimate the discrimination and tracking efficiency, several quality metrics were computed. Results of comparative experimental studies demonstrate the advantage of the developed algorithm. The suggested approach is not computationally intensive and is suitable for use in relatively complex observation conditions.

1. Alpatov B.A., Blokhin A.N., Muraviev V.S. Image processing algorithm for automatic aerial object tracking systems // Digital Signal Processing.– 2010.– No. 4.– pp. 12-17 (in Russian).

2. Babayan P.V., Feldman A.B. Object localization in the image in technical vision systems of mobile robots // Bulletin of the State Radio Engineering University. – 2011. – No. 38. – pp. 19-25 (in Russian).

3. Alpatov B.A., Babayan P.V., Smirnov S.A. Automatic object tracking in the absence of a priori information about the observation conditions // Digital Signal Processing. – 2009. – No. 3. – pp. 52-56 (in Russian).

4. Labonte G., Deck W.C. Infrared target-flare discrimination using a ZISC hardware neural network // Journal of real-time image processing. – Vol. 5, Issue 1. – 2010. – PP. 11-32.

5. Viau C.R. Expendable Countermeasure effectiveness against imaging infrared guided threats // Second International Conference on Electronic Warfare (EWCI-2012), India, Bangalore, 2012.

6. Gray G.J., Aouf N., Richardson M.A., Butters B., Walmsley R. An intelligent tracking algorithm for an imaging infrared anti-ship missile // Proc of SPIE, Technologies for optical countermeasures IX. – 2012. – Vol. 8543, 85430L.

7. Lowe D.G. Distinctive image features from scale-invariant keypoints // International journal of computer vision. – 2004. – Vol. 60(2). – PP. 91-110.

8. Reyes-Aldasoro C.C., Bhalerao A. The Bhattacharyya space for feature selection and its application to texture segmentation // Pattern Recognition. – 2006. – Vol. 39, Issue 5. – PP. 812-826.

9. Visilter J.B., Zheltov S.J., Bondarenko A.V. et al. Image processing and analysis in machine vision. – Moscow, 2010. – 672 p. (in Russian).

10. Shapiro L.G., Stockman G.C. Computer vision, Moscow, 2006. – 752 p. (in Russian).

11. A method of processing image sequence for detection and tracking of aerial targets. Patent of Russia No. 2419150 / Alpatov B.A., Babayan P.V., Kostyashkin L.N., Muraviev S.I., Muraviev V.S., Romanov J.N., Egel V.N.; applicant and patentee: Ryazan State Instrument Plant; priority 10.03.2010, published 20.05.2011, Bul. No. 14.

12. Bar-Shalom Y., Li X.-R. Kirubarajan T. Estimation with applications to tracking and navigation: theory, algorithms and software, New-York: Wiley, 2001. – 581p.

Simplified Distortion Compensation Algorithm for Projecting Video on Aspherical Reflective Surfaces of A Priori Unknown Form
I.S. Kholopov, Ryazan State Radio Engineering University ("RSREU"), Russia, Ryazan, e-mail:

Keywords: projection display system, distortion, predistortion, bilinear interpolation.

The article is devoted to compensating the distortion of images displayed on the optical combiner of projection display systems (head-up displays, HUD) with an a priori unknown aspheric surface. Since optical devices alone cannot eliminate the distortion, predistortion must be introduced into the original image so that the resulting projected image on the combiner preserves straight lines as well as the distances and angles between them.

The paper proposes a simplified algorithm for rapid selection of distortion coefficients when the shape of the combiner surface is a priori unknown. It is used to compensate for the three main types of distortion characteristic of the HUD optical combiner:
1) projective distortion of the "trapeze" (keystone) type,
2) distortion of the "barrel" type, and
3) projective distortion caused by the zoom ratio.

Parametric analytical expressions are proposed for the form of the predistortion that minimizes these types of distortion. Because the predistorted image pixels have fractional coordinates, the algorithm uses bilinear interpolation to calculate the brightness of pixels with integer coordinates.
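Bilinear interpolation at fractional coordinates is the standard resampling step here; a minimal self-contained sketch (function name ours, clamping at the border is our assumption):

```python
import numpy as np

def bilinear(img, x, y):
    """Brightness at fractional coordinates (x, y) as a weighted mean
    of the four surrounding integer-grid pixels; coordinates are
    clamped at the right/bottom image border."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot
```

For example, the value at the exact center of a 2x2 patch is the average of its four pixels.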

The distortion compensation parameters can be selected in manual mode with a fixed step by the criterion of the operator's visual comfort or, when a technological camera is installed in the exit pupil, by solving an overdetermined system of nonlinear equations with the Levenberg-Marquardt numerical optimization algorithm.

The disadvantages of all the considered distortion compensation algorithms are the loss of part of the useful video frame information and a decrease in resolution.

The algorithm was tested on a PC using the TIT-249 TV test pattern and the ISO 12233 chart at a resolution of 1024x768 pixels. The decrease of the horizontal and vertical resolution after predistortion is no more than 20%. The proposed simplified predistortion algorithm provides a standard deviation of image pixel positions of less than 1 pixel compared with a high-order polynomial approximation of the distortion.


1. H. Li, X. Zhang, G. Shi, H. Qu, Y. Wu, J. Zhang. Review and analysis of avionic helmet-mounted displays // Optical Engineering. 2013. Vol. 52 (11). P. 110901-1-110901-14.

2. A.V. Kozlov, I.G. Denisov, D.N. Sharifullina. Helmet-mounted display system [Electronic resource] // Future Engineering of Russia: Proceedings of VI Russian conf. young scientists and specialists. Moscow, The Bauman Technical University, 2013, 1 CD-ROM.

3. H. Hua, C. Gao, L. Brown, F. Biocca, J.P. Rolland. Design of an ultra-light head-mounted projective display (HMPD) and its applications in augmented collaborative environments // Stereoscopic displays and virtual reality systems. Proceedings of SPIE. 2002. Vol. 4660. P. 492-497.

4. A.V. Bakholdin, V.N. Vasilyev, V.A. Grimm, G.E. Romanova, S.A. Smirnov. Optical virtual display devices // Optical journal. 2013. No 5. P. 17-24.

5. J.E. Melzer, K.W. Moffitt. Head-mounted displays: designing for the user. McGraw-Hill, 1997, 352 p.

6. Secondary information projection display system based on aircraft and car [Electronic resource]. – URL: (request date 30.06.2014).

7. D. Malacara, Z Malacara. Handbook of optical design / 2nd edition. New York, Marcel Decker, 2004, 522 p.

8. R. Hartley, A. Zisserman. Multiple view geometry in computer vision / 2nd edition. Cambridge, Cambridge University Press, 2003, 656 p.

9. R.Y. Tsai. A versatile camera calibration technique for high-accuracy 3d machine vision metrology using off-the-shelf TV cameras and lenses // IEEE Journal on Robotics and Automation. 1987. RA-3(4). P. 323–344.

10. A.N. Lobanov. Photogrammetry / 2nd edition, revised and additional. Moscow, Nedra, 1984, 552 p.

11. V.P. Kovalenko, Yu.G. Veselov, I.V. Karpikov. Modern infrared systems estimation distortion method // Vestnik The Bauman Technical University. "Engineering". 2011. No 1. P. 98-107.

12. I.S. Gruzman, V.S. Kirichuk, V.P. Kosykh, G.I. Peretyagin, A.A. Spector. Digital image processing in information systems. Novosibirsk, NSTU Publisher, 2000, 168 p.

13. Yu.V. Vizilter, S.Yu. Zheltov, A.V. Bondarenko, M.V. Ososkov, A.V. Morzhin. Image processing and analysis tasks in machine vision: a course of lectures and workshops. Moscow, “Phyzmathkniga” Publisher, 2010, 672 p.

Algorithm for Creating Topographic Maps Using Artificial Neural Networks
M.V. Akinin, Ryazan State Radio Engineering University, Russia, Ryazan,

Keywords: artificial neural network, support vector machine, Kohonen's neural map, multilayer perceptron, Kitano's graph generation grammar, genetic algorithm.

This article presents an algorithm for refining vector topographic maps from Earth remote sensing data. The use of an intelligent system based on support vector machines, multilayer perceptrons, feedforward neural networks, and Kohonen neural maps is proposed, together with an approach to training the neural networks that combines backpropagation, a genetic algorithm, and Kitano's graph generation grammar.

The task of promptly creating and updating topographic maps is one of the most important tasks in various sectors of the economy, nature management, and human life safety.

This problem can be solved through the analysis of remote sensing (RS) data by semi- and fully automated intelligent systems that provide high accuracy and reduce the time spent on the solution.

To test the viability of the proposed method of topographic map refinement, several experiments were carried out, one of which is discussed below.

The experiment consisted in refining and supplementing the topographic map of the "Malynysche" test site, located on the border of the Ryazansky and Pronsky districts of the Ryazan region and named after the Malynysche village located on this site.

A Landsat-7 (ETM+ camera) satellite image of the "Malynysche" site, acquired on May 22, 2000, was used as the source image, and the initial topographic map was created on its basis. The task of the algorithm was to refine this original topographic map and to add information about the entire test site to the resulting map. The refinement and completion of the topographic map were performed using a Landsat-7 (ETM+) satellite image of May 23, 2006.

In comparison with a manually drawn topographic map, the algorithm showed high accuracy: less than 5% of the pixels were misclassified.

On average, the algorithm achieves an accuracy of 93% of correctly classified pixels compared to manual classification.

The developed algorithm has also shown high time efficiency: a satellite image can be processed in 30-40 ms on average, which is sufficient to process 25 frames per second (25 FPS), a frame rate high enough for a video sequence to appear smooth and continuous to the average person.

1. Shapiro L., Stockman G. Computer Vision. — Prentice Hall. - 2001.

2. Gonzalez R., Woods R. Digital Image Processing. - Prentice Hall. - 2007.

3. S. Haykin, Neural Networks: A Comprehensive Foundation, New Jersey: Prentice Hall, 1999.

4. Akinin M.V., Konkin Y.V. Technology thematic interpretation of satellite images based on support vector machines. // Informatics and Applied Mathematics: Interuniversity collection of scientific papers. - Ryazan: Ryazan State University. - 2010.

5. Akinin M.V., Konkin Y.V. The study approaches to learning multilayer perceptron. // Methods and tools for data processing and storage: Interuniversity collection of scientific papers. - Ryazan: Ryazan State Radio Engineering University. - 2012.

6. Aksenov S.V., Novoselchev V.B. Organization and the use of neural networks (methods and technologies). - Tomsk: NTL. - 2006. - 128 p.

7. Kitano H. Designing neural network using genetic algorithm with graph generation system // Complex Systems. Vol. 4. — 1990.

If you have any questions, please write: