«INFORMATICS AND APPLICATIONS» Scientific journal Volume 9, Issue 4, 2015
Abstract and Keywords.
- V. Yu. Korolev Faculty of Computational Mathematics and Cybernetics, M.V. Lomonosov Moscow State University,
1-52 Leninskiye Gory, GSP-1, Moscow 119991, Russian Federation, Institute of Informatics Problems, Federal Research Center “Computer Sciences and Control” of the Russian
Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- A. K. Gorshenin Institute of Informatics Problems, Federal Research Center “Computer Sciences and Control” of the Russian
Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation, Moscow State University of Information Technologies, Radioengineering, and Electronics, 78 Vernadskogo Ave.,
Moscow 119454, Russian Federation
- S. K. Gulev Faculty of Geography, M.V. Lomonosov Moscow State University, 1 Leninskiye Gory, GSP-1, Moscow 119991, Russian Federation, P. P. Shirshov Institute of Oceanology, 36 Nakhimovski Prosp., Moscow 117997, Russian Federation, University of Kiel, Christian-Albrechts-Universitat zu Kiel, 4 Christian-Albrechts-Platz, Kiel 24098, Germany
- K. P. Belyaev P. P. Shirshov Institute of Oceanology, 36 Nakhimovski Prosp., Moscow 117997, Russian Federation, Federal University of Bahia, Rua Adhemar de Barros, no 500, Ondina, 40.710-110, Salvador, Bahia, Brazil
Abstract: The method of moving separation of mixtures is applied to the problem of statistical modeling of regularities in sensible and latent turbulent heat fluxes. Six-hour observations in the Atlantic region (NCEP-NCAR, 1948-2008) are used as initial data. The basic approximate mathematical model is a finite normal mixture with time-dependent parameters. The methodology of moving separation of mixtures makes it possible to analyze the regularities in the variation of the parameters and to capture variability that can be associated with a trend as well as with irregular variation. An approach to determining the proportion of extreme observations in the original sample is proposed.
Keywords: finite normal mixtures; moving separation of mixtures; probabilistic models; data mining
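The moving-separation methodology can be illustrated with a minimal Python sketch (not the authors' implementation): a two-component univariate normal mixture is re-fitted by EM on a sliding window, so the mixture parameters become functions of time. Window and step sizes are illustrative assumptions.

```python
import numpy as np

def em_normal_mixture(x, k=2, n_iter=200):
    """Fit a k-component univariate finite normal mixture by EM.
    Returns weights, means, and standard deviations."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread-out initialization
    sigma = np.full(k, x.std() + 1e-9)
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2.0 * np.pi))
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n_k = resp.sum(axis=0)
        w = n_k / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k) + 1e-9
    return w, mu, sigma

def moving_separation(series, window=500, step=250, k=2):
    """'Moving' separation: re-fit the mixture on a sliding window so
    the parameters become functions of time."""
    return [em_normal_mixture(series[i:i + window], k=k)
            for i in range(0, len(series) - window + 1, step)]
```

Trends in the fitted means and weights across windows can then be associated with slow climatic variation, while jumps in the variances reflect irregular variability.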
- V. Yu. Korolev Faculty of Computational Mathematics and Cybernetics, M.V. Lomonosov Moscow State University,
1-52 Leninskiye Gory, GSP-1, Moscow 119991, Russian Federation, Institute of Informatics Problems, Federal Research Center “Computer Sciences and Control” of the Russian
Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- A. Yu. Korchagin Institute of Informatics Problems, Federal Research Center “Computer Sciences and Control” of the Russian
Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- I. A. Sokolov Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
Abstract: Some aspects of the application of generalized variance gamma distributions to modeling statistical regularities in financial markets are discussed. The paper describes elementary properties of generalized variance gamma distributions as special normal variance-mean mixtures in which the mixing distributions are generalized gamma laws. Limit theorems for sums of a random number of independent random variables are presented that are analogs of the law of large numbers and the central limit theorem. These theorems give grounds for using generalized variance gamma distributions as asymptotic approximations. The paper presents the results of practical fitting of generalized variance gamma distributions to real data on the behavior of financial indexes, as well as of fitting generalized gamma distributions to the observed intensities of information flows in contemporary financial information systems. A comparison of generalized gamma models with generalized hyperbolic models demonstrates the superiority of the former over the latter. Methods for parameter estimation of generalized gamma models are also discussed, as well as their application to predicting processes in financial markets.
Keywords: random sum; normal mixture; normal variance-mean mixture; generalized hyperbolic distribution; generalized variance-gamma distribution; generalized gamma distribution; law of large numbers; central limit theorem
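The normal variance-mean mixture construction behind these distributions can be sketched in a few lines of Python (an illustration, not the paper's estimation procedure). Here the generalized gamma law is sampled via a power transform of an ordinary gamma variate; all parameter names and default values are illustrative assumptions.

```python
import numpy as np

def sample_generalized_gamma(rng, shape, power, scale, size):
    """G = scale * V**(1/power) with V ~ Gamma(shape, 1) follows a
    generalized gamma law (power=1 recovers the ordinary gamma)."""
    return scale * rng.gamma(shape, 1.0, size) ** (1.0 / power)

def sample_gvg(rng, mu=0.0, beta=0.0, shape=2.0, power=1.5, scale=1.0, size=10000):
    """Normal variance-mean mixture Y = mu + beta*G + sqrt(G)*Z with a
    generalized gamma mixing distribution for G: a generalized variance
    gamma sample. beta controls skewness; beta=0 gives a symmetric law."""
    g = sample_generalized_gamma(rng, shape, power, scale, size)
    z = rng.standard_normal(size)
    return mu + beta * g + np.sqrt(g) * z
```

Such samples can be used, for example, to compare tail behavior of fitted generalized variance gamma and generalized hyperbolic models on log-returns.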
- D. N. Zmejev Institute for Design Problems in Microelectronics, Russian Academy of Sciences, 3 Sovetskaya Str., Moscow 124365, Russian Federation
- A. V. Klimov Institute for Design Problems in Microelectronics, Russian Academy of Sciences, 3 Sovetskaya Str., Moscow 124365, Russian Federation
- N. N. Levchenko Institute for Design Problems in Microelectronics, Russian Academy of Sciences, 3 Sovetskaya Str., Moscow 124365, Russian Federation
- A. S. Okunev Institute for Design Problems in Microelectronics, Russian Academy of Sciences, 3 Sovetskaya Str., Moscow 124365, Russian Federation
- A. L. Stempkovsky Institute for Design Problems in Microelectronics, Russian Academy of Sciences, 3 Sovetskaya Str., Moscow 124365, Russian Federation
Abstract: The article describes a parallel dataflow computing system with dynamic forming of context and the architectural features of its implementation. The dataflow computing model makes it possible to solve problems that arise when creating and using supercomputers. One of these problems is the difficulty of keeping the growing number of functional units and cores loaded while remaining within the bounds of traditional programming. The article describes the advantages of the proposed computing model and compares the paradigm of "distribution" with the traditional paradigm of "collection"; the dataflow computing model works in the paradigm of "distribution." The basic features of the architecture of the "Buran" parallel dataflow computing system and its differences from classical dataflow computing systems are described. The studies give hope that the proposed computing model will in the future become the main programming paradigm for large-scale parallel computing.
Keywords: dataflow computing system; paradigm of distribution; content-addressable memory; localization of computation
- I. N. Sinitsyn Institute of Informatics Problems, Federal Research Center “Computer Sciences and Control” of the Russian
Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
Abstract: Methods of analytical modeling (MAM) for processes in dynamical systems with complex Bessel nonlinearities under harmonic and stochastic wide- and narrow-band disturbances are given. Necessary elements of the theory of cylindrical Bessel functions and of complex Bessel nonlinearities are presented. Methodological and algorithmic support for MAM based on the statistical linearization method (SLM) and the normal approximation method for wide-band stochastic processes (white noise) is developed. Peculiarities of MAM for harmonic and narrow-band stochastic processes are discussed. Test examples for one-dimensional systems with additive and multiplicative noises and Bessel nonlinearities, and for a Bessel oscillator with various disturbances, are given. Conclusions and some generalizations are mentioned.
Keywords: Bessel nonlinearity; Bessel oscillator; complex Bessel nonlinearity; Gibbs formula; harmonic process; Kummer function; method of analytical modeling; narrow-band stochastic processes; normal approximation method (NAM); statistical linearization method (SLM); stochastic system on manifold (MStS); white noise; wide-band stochastic process
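The statistical linearization method (SLM) mentioned in the abstract replaces a nonlinearity f(X) by an affine approximation k0 + k1(X - m) for Gaussian X. As a hedged illustration (not the author's algorithm), the coefficients can be computed by Gauss-Hermite quadrature; the sketch below uses cos as the nonlinearity, since for cos the exact answer is known in closed form — for a true Bessel nonlinearity one would substitute, e.g., scipy.special.j0.

```python
import numpy as np

def stat_linearization(f, m, sigma, n=64):
    """Statistical linearization of a scalar nonlinearity f for
    X ~ N(m, sigma^2): f(X) ~= k0 + k1*(X - m), where
    k0 = E f(X) and k1 = cov(f(X), X) / sigma^2.
    Expectations are evaluated by n-node Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n)
    x = m + sigma * nodes
    w = weights / weights.sum()          # normalize to a probability measure
    k0 = np.dot(w, f(x))                 # mean of the nonlinearity
    k1 = np.dot(w, f(x) * (x - m)) / sigma**2   # equivalent gain
    return k0, k1
```

For f = cos, the exact values are k0 = cos(m) exp(-sigma^2/2) and (by Stein's lemma) k1 = -sin(m) exp(-sigma^2/2), which the quadrature reproduces to machine accuracy.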
- O. G. Vikhrova Peoples' Friendship University of Russia, 6 Miklukho-Maklaya Str., Moscow 117198, Russian Federation
- K. E. Samouylov Peoples' Friendship University of Russia, 6 Miklukho-Maklaya Str., Moscow 117198, Russian Federation
- E. S. Sopin Peoples' Friendship University of Russia, 6 Miklukho-Maklaya Str., Moscow 117198, Russian Federation
- S. Ya. Shorgin Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
Abstract: Analysts predict that from 2014 to 2019, worldwide mobile traffic will grow approximately three times faster than fixed traffic. The number of mobile users will increase to 4.9 billion, and the number of mobile devices will exceed 10 billion. The average mobile network connection speed (1.7 Mbps in 2014) will reach nearly 4.0 Mbps by 2019. Special attention should be paid to mobile video traffic, which will account for three-fourths of all mobile traffic by 2019. These tendencies bring new challenges for mobile communication providers to increase the efficiency and adaptivity of radio resource allocation. In this connection, the paper analyzes a simplified model that allows one to obtain analytical estimates of the blocking probability and the average amount of occupied resources under the resource allocation policy of the LTE-Advanced technology.
Keywords: LTE-Advanced; resource allocation policy; limited resource queue
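Blocking probabilities in limited-resource loss models of this kind are classically computed by the Kaufman-Roberts recursion; the sketch below is a standard textbook implementation offered as an illustration, not necessarily the simplified model analyzed in the paper.

```python
def kaufman_roberts(capacity, offered):
    """Kaufman-Roberts recursion for a multiservice loss system with
    `capacity` resource units. `offered` is a list of (load_erlangs,
    resource_units) pairs, one per traffic class. Returns the blocking
    probability of each class."""
    q = [0.0] * (capacity + 1)
    q[0] = 1.0
    # Unnormalized occupancy distribution: n*q(n) = sum_k a_k*b_k*q(n-b_k)
    for n in range(1, capacity + 1):
        q[n] = sum(a * b * q[n - b] for a, b in offered if n >= b) / n
    norm = sum(q)
    q = [v / norm for v in q]
    # A class needing b units is blocked when fewer than b units are free.
    return [sum(q[capacity - b + 1:]) for _, b in offered]
```

With a single class requiring one unit, the recursion reduces to the Erlang B formula, which gives a quick sanity check of an implementation.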
- M. G. Konovalov Institute of Informatics Problems, Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- R. V. Razumchik Institute of Informatics Problems, Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation, Peoples' Friendship University of Russia, 6 Miklukho-Maklaya Str., Moscow 117198, Russian Federation
Abstract: A review of research papers devoted to the analysis of the dispatching problem in queueing systems is presented. The analysis is restricted to the class of systems with independent, fully reliable servers operating in parallel and stochastic incoming flows of customers without precedence constraints. The general goal of the analysis carried out in most of the papers is the solution of an optimization problem, whose specification heavily depends on the additional assumptions made. The models considered in the review are classified into several classes depending on the amount of a priori information, observability at decision times, and performance criteria. The dispatching algorithms most commonly found in the literature and their properties are described. The main methods used for the analysis of systems under these dispatching algorithms are reviewed. This review is intended to draw the attention of the research community to one of the important problems in the field of information processing.
Keywords: dispatching; scheduling; parallel service; queueing system; optimization
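Two of the dispatching algorithms most commonly compared in this literature, round robin and join-the-shortest-queue (JSQ), can be contrasted with a small event-driven simulation. The sketch below is an illustration under simple Markovian assumptions (Poisson arrivals, exponential services), not a model from any specific reviewed paper.

```python
import random

def simulate_dispatching(policy, lam, mu, servers=3, t_end=20000.0, seed=7):
    """Event-driven simulation of a dispatcher in front of parallel
    single-server FIFO queues. Returns the time-averaged total number
    of customers in the system. policy: "rr" or "jsq"."""
    random.seed(seed)
    q = [0] * servers      # customers at each server, including in service
    rr = 0                 # round-robin pointer
    t = area = 0.0
    while t < t_end:
        busy = [i for i in range(servers) if q[i] > 0]
        total = lam + mu * len(busy)
        dt = random.expovariate(total)
        area += sum(q) * dt        # accumulate the time average of occupancy
        t += dt
        u = random.uniform(0.0, total)
        if u < lam:                # arrival: the dispatcher picks a server
            if policy == "jsq":
                i = min(range(servers), key=lambda j: q[j])
            else:                  # round robin ignores the queue state
                i, rr = rr, (rr + 1) % servers
            q[i] += 1
        elif busy:                 # departure from a uniformly chosen busy server
            q[random.choice(busy)] -= 1
    return area / t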
- R. V. Razumchik Institute of Informatics Problems, Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation, Peoples' Friendship University of Russia, 6 Miklukho-Maklaya Str., Moscow 117198, Russian Federation
Abstract: Consideration is given to a single-server queueing system (QS) with a Poisson flow of (ordinary) customers and a Poisson flow of negative customers. There is a queue of capacity k (0 < k < ∞) where ordinary customers wait for service. If an ordinary customer finds the queue full upon arrival, it is considered lost. Each negative customer upon arrival moves one ordinary customer from the queue, if it is not empty, to another queue (bunker) of capacity r (0 < r < ∞) and then leaves the system. If upon arrival of a negative customer the queue is not empty and the bunker is full, the negative customer and one ordinary customer from the queue leave the system. In all other cases, an arrival of a negative customer has no effect on the system. Customers from the bunker are served with relative priority (i.e., a customer from the bunker enters the server only if there are no customers in the queue to be served). Service times of customers from both the queue and the bunker are exponentially distributed with the same parameter. A purely algebraic method, based on generating functions and Chebyshev and Gegenbauer polynomials, for approximate calculation of the joint stationary probability distribution is presented for the case k = r. Numerical examples showing both pros and cons of the method are provided.
Keywords: queueing system; negative customers; Gegenbauer polynomials; stationary distribution; approximation
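The dynamics described in the abstract can be cross-checked against a direct event-driven simulation. The sketch below implements exactly the stated rules (queue of capacity k, bunker of capacity r, relative priority) and estimates the loss probability of ordinary customers; it is an illustrative companion to, not a substitute for, the paper's algebraic method.

```python
import random

def simulate_negative(lam, mu, eta, k, r, t_end=20000.0, seed=42):
    """Simulate the single-server queue with negative customers:
    lam - rate of ordinary arrivals, eta - rate of negative arrivals,
    mu - service rate, k - queue capacity, r - bunker capacity.
    Returns the estimated loss probability of ordinary customers."""
    random.seed(seed)
    t = 0.0
    queue = bunker = 0
    busy = False
    arrived = lost = 0
    while t < t_end:
        total = lam + eta + (mu if busy else 0.0)
        t += random.expovariate(total)
        u = random.uniform(0.0, total)
        if u < lam:                      # ordinary arrival
            arrived += 1
            if not busy:
                busy = True
            elif queue < k:
                queue += 1
            else:
                lost += 1                # queue full: the customer is lost
        elif u < lam + eta:              # negative arrival
            if queue > 0:
                queue -= 1
                if bunker < r:
                    bunker += 1          # displaced into the bunker
                # otherwise the displaced customer leaves the system
        elif busy:                       # service completion
            if queue > 0:                # the queue has relative priority
                queue -= 1
            elif bunker > 0:
                bunker -= 1
            else:
                busy = False
    return lost / arrived if arrived else 0.0
```

Such a simulation is a natural way to validate the approximate joint stationary distribution produced by the Chebyshev/Gegenbauer expansion on test cases.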
- S. Frenkel Institute of Informatics Problems, Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation, Moscow State University of Information Technologies, Radioengineering, and Electronics, 78 Vernadskogo Ave., Moscow 119454, Russian Federation
- M. Kopeetsky Department of Software Engineering, Shamoon College of Engineering, Basel/Bialik Sts, Beer-Sheva, Israel
- R. Molotkovski Department of Software Engineering, Shamoon College of Engineering, Basel/Bialik Sts, Beer-Sheva, Israel
- P. Borovsky Department of Software Engineering, Shamoon College of Engineering, Basel/Bialik Sts, Beer-Sheva, Israel
Abstract: The paper proposes two novel schemes which improve the dictionary-based Lempel-Ziv-Welch (LZW) compression algorithm. The first scheme improves the LZW algorithm by applying an exponential decay (ED) technique as a tool to manage and remove infrequently used entries in the LZW dictionary. The presented results demonstrate that ED may be an efficient tool to manage and refresh the LZW dictionary. The achieved compression ratio (CR) is higher than in traditional methods such as Dictionary Reset (DR) and Least Recently Used (LRU). The second approach uses the Distance from Last Use (DLU) method. The DLU can be compressed by Huffman coding based on the frequencies of the phrases. The resulting compression scheme, called HCD (Huffman Coding of Distance), was tested on different real-life data types, such as text, programming code, audio, video, and image files, characterized by different Shannon entropy. The experimental results demonstrate that the ED and HCD schemes may provide a higher CR compared with the LZW algorithm.
Keywords: LZW dictionary compression; dynamic dictionary; dictionary reset (DR); least recently used (LRU); exponential decay (ED)
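The exponential-decay idea can be sketched on the compression side of LZW: each dictionary entry carries a usage score that is rewarded on use and multiplied by a decay factor at every step, and the coldest entry is evicted when the dictionary is full. This is a hedged illustration of the idea, not the paper's exact scheme; a real codec must mirror the eviction policy in the decoder to stay synchronized.

```python
def lzw_ed_compress(data, max_dict=512, decay=0.99):
    """LZW compression with exponentially decayed usage scores: when
    the dictionary is full, the lowest-scored non-root entry is evicted
    and its code is reused. Returns the list of emitted codes."""
    table = {bytes([i]): i for i in range(256)}     # string -> code
    inv = {i: bytes([i]) for i in range(256)}       # code -> string
    score = {}                                      # only non-root codes decay
    next_code = 256
    out = []
    w = b""
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc
            continue
        code = table[w]
        out.append(code)
        if code in score:
            score[code] += 1.0                      # reward the used entry
        for c in score:
            score[c] *= decay                       # every entry fades a bit
        if next_code < max_dict:
            new_code = next_code
            next_code += 1
        else:
            new_code = min(score, key=score.get)    # evict the coldest entry
            del table[inv[new_code]]
        table[wc] = new_code
        inv[new_code] = wc
        score[new_code] = 1.0
        w = bytes([b])
    if w:
        out.append(table[w])
    return out
```

Decaying all scores per emitted code is O(dictionary size) per symbol; a production implementation would use a lazier bookkeeping scheme, but the refresh behavior is the same.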
- A. A. Grusho Institute of Informatics Problems, Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- N. A. Grusho Institute of Informatics Problems, Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- E. E. Timonina Institute of Informatics Problems, Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
Abstract: The paper is devoted to estimation of the capacity and rate of information transfer in covert channels of a special type. These covert channels are generated with the help of signals (tags) which do not bear semantic information but are easily detected at the receiving end. Such tags are a kind of bans from the point of view of admissible values of the parameters connected with the transfer of legal information. The covert information is coded by the lengths of the fragments of the legally transferred data delimited by tags.
Keywords: covert channels; probability-theoretic models of covert channels; covert channels generated by tags; capacity of a covert channel; rate of information transfer
- A. A. Grusho Institute of Informatics Problems, Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- M. I. Zabezhailo All-Russian Institute for Scientific and Technical Information of Russian Academy of Sciences, 20 Usievicha Str., Moscow 125190, Russian Federation
- A. A. Zatsarinny Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
Abstract: Control of information flows is one of the most important security mechanisms in cloud computing. Some SDN (Software Defined Networks) based technologies for packet forwarding in the cloud computing environment are presented and analyzed. A special problem-oriented procedure for monitoring and quick control of forwarding table reconfiguration is discussed. This procedure is responsible for enforcing the requirements of the network security policy in a dynamically changing cloud computing environment. The presented control procedures are based on lexical ordering (LO) of the forwarding table (FT) and on transformation of the FT into a binary tree (BT). Some estimates of the computational complexity of the presented header analysis by BT and LO are discussed. Possibilities to extend the presented algorithmic approach from header analysis to Deep Packet Inspection (DPI), and some related problems, are outlined and evaluated.
Keywords: cloud computing; software defined networks; scalability of network resources; dynamic changes in network topology; information security in cloud computing environment; header analysis
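A common way to turn a lexically ordered forwarding table into a binary tree for header analysis is a bitwise trie with longest-prefix matching; the sketch below is a generic illustration of that idea (the rule strings and actions are hypothetical), not the specific procedure described in the paper.

```python
class TrieNode:
    """Node of a binary tree built over the bit prefixes of a
    forwarding table."""
    __slots__ = ("children", "action")
    def __init__(self):
        self.children = [None, None]
        self.action = None

def ft_insert(root, prefix_bits, action):
    """Install a forwarding rule for the given bit prefix."""
    node = root
    for b in prefix_bits:
        if node.children[b] is None:
            node.children[b] = TrieNode()
        node = node.children[b]
    node.action = action

def ft_lookup(root, header_bits):
    """Longest-prefix match: descend the tree along the header bits,
    remembering the most specific rule seen so far."""
    node, best = root, None
    for b in header_bits:
        node = node.children[b]
        if node is None:
            break
        if node.action is not None:
            best = node.action
    return best
```

Lookup cost is bounded by the header length rather than the table size, which is the kind of complexity estimate the BT/LO analysis in the paper concerns.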
- I. A. Kirikov Kaliningrad Branch of the Federal Research Center "Computer Science and Control" of the Russian Academy
of Sciences, 5 Gostinaya Str., Kaliningrad 236000, Russian Federation
- A. V. Kolesnikov Kaliningrad Branch of the Federal Research Center "Computer Science and Control" of the Russian Academy
of Sciences, 5 Gostinaya Str., Kaliningrad 236000, Russian Federation, Immanuel Kant Baltic Federal University, 14 Nevskogo Str., Kaliningrad 236041, Russian Federation
- S. V. Listopad Kaliningrad Branch of the Federal Research Center "Computer Science and Control" of the Russian Academy
of Sciences, 5 Gostinaya Str., Kaliningrad 236000, Russian Federation
- S. B. Rumovskaya Kaliningrad Branch of the Federal Research Center "Computer Science and Control" of the Russian Academy
of Sciences, 5 Gostinaya Str., Kaliningrad 236000, Russian Federation
Abstract: The paper considers the problems of interdisciplinary tools and the "granularity" property of hybrids in informatics. The results are presented within the linguistic approach, whose core is the transformation of verbalized information about original objects (complex subject domains) and prototype objects (modeling approaches) into result objects (functional hybrid intelligent systems); this information exists in the polylanguages of professional activity. The transformation is directed by heuristics, which are schemes of conceptual role models in an informal axiomatic theory. The categorical core of the theory is "resource-property-operation-relation." Mono-, bi-, and tri-role constructs, which are the basic elements, are specified over its extension. On the basis of these elements, schemes are built for reflecting information about resources, operations, situations, the state of the object of management, complex tasks of the managing entity, and fine-grained hybrids of the modeling entity.
Keywords: logical-mathematical intelligence; hybrid intelligent systems; linguistic approach; theory of conceptual role models
- V. G. Ushakov Department of Mathematical Statistics, Faculty of Computational Mathematics and Cybernetics, M.V. Lomonosov Moscow State University, 1-52 Leninskiye Gory, Moscow 119991, GSP-1, Russian Federation, Institute of Informatics Problems, Federal Research Center "Computer Science and Control" of Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- N. G. Ushakov Institute of Microelectronics Technology and High-Purity Materials of the Russian Academy of Sciences, 6 Academician Osipyan Str., Chernogolovka, Moscow Region 142432, Russian Federation, Norwegian University of Science and Technology, 15A S.P. Andersensvei, Trondheim 7491, Norway
Abstract: Each observed value is registered with finite accuracy which is determined by the sensitivity of the equipment. It is expected that rounding errors could play an important role in the estimation of the mean of the observed value. On the other hand, the researcher usually has a possibility to affect the observation before its registration, for example, to intensify it or to add some additional component. This paper studies the relationship between the measurement error, rounding error, and the accuracy of the reconstruction of the observed value for the case of averaging of repeated measurements. It is demonstrated that under a fixed rounding level, in some sense, the greater the measurement error, the higher the reconstruction accuracy.
Keywords: rounded data; law of large numbers; total variation; decomposition of probability distributions
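The paper's central effect — that under a fixed rounding level a larger measurement error can improve the accuracy of the reconstructed mean — is easy to demonstrate numerically (this is the dithering phenomenon). The sketch below is an illustration of that effect under simple Gaussian-noise assumptions, not the paper's analysis.

```python
import numpy as np

def mean_of_rounded(true_value, noise_std, step, n, seed=0):
    """Estimate true_value by averaging n readings, each corrupted by
    N(0, noise_std) measurement noise and then rounded to a grid of
    width `step` (the registration accuracy of the equipment)."""
    rng = np.random.default_rng(seed)
    readings = true_value + rng.normal(0.0, noise_std, n)
    return (np.round(readings / step) * step).mean()
```

With a grid of width 1 and a true value of 0.3, tiny noise makes every reading round to 0, so the averaged estimate is biased by 0.3 no matter how many readings are taken; noise comparable to the grid width spreads the readings over several grid points, and averaging then recovers the true value closely.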