Literature Survey on Non-Associative Rings and Developments <s> Octonions <s> We rephrase the classical theory of composition algebras over fields, particularly the Cayley-Dickson Doubling Process and Zorn's Vector Matrices, in the setting of locally ringed spaces. Fixing an arbitrary base field, we use these constructions to classify composition algebras over (complete smooth) curves of genus zero. Applications are given to composition algebras over function fields of genus zero and polynomial rings. <s> BIB001 </s> Literature Survey on Non-Associative Rings and Developments <s> Octonions <s> I. Underpinnings. II. Division Algebra Alone. III. Tensor Algebras. IV. Connecting to Physics. V. Spontaneous Symmetry Breaking. VI. 10 Dimensions. VII. Doorways. VIII. Corridors. Appendices. Bibliography. Index. <s> BIB002 </s> Literature Survey on Non-Associative Rings and Developments <s> Octonions <s> 1. Introduction 2. Non-associative algebras 3. Hurwitz theorems and octonions 4. Para-Hurwitz and pseudo-octonion algebras 5. Real division algebras and Clifford algebra 6. Clebsch-Gordon algebras 7. Algebra of physical observables 8. Triple products and ternary systems 9. Non-associative gauge theory 10. Concluding remarks. <s> BIB003 </s> Literature Survey on Non-Associative Rings and Developments <s> Octonions <s> Oil from subsurface tar sand having an injection means in fluid communication with a production means is recovered by injecting a water-external micellar dispersion at a temperature above 100 DEG F., into the tar sands, displacing it toward the production means and recovering the oil through the production means. The micellar dispersion can be preceded by a slug of hot water which can optionally have a pH greater than about 7. Also, the micellar dispersion can have a pH of about 7-14 and preferably a temperature greater than about 150 DEG F. The micellar dispersion contains hydrocarbon, surfactant, aqueous medium, and optionally cosurfactant and/or electrolyte. 
<s> BIB004 </s> Literature Survey on Non-Associative Rings and Developments <s> Octonions <s> This book investigates the geometry of quaternion and octonion algebras. Following a comprehensive historical introduction, the book illuminates the special properties of 3- and 4-dimensional Euclidean spaces using quaternions, leading to enumerations of the corresponding finite groups of symmetries. The second half of the book discusses the less familiar octonion algebra, concentrating on its remarkable "triality symmetry" after an appropriate study of Moufang loops. The authors also describe the arithmetics of the quaternions and octonions. The book concludes with a new theory of octonion factorization. Topics covered include the geometry of complex numbers, quaternions and 3-dimensional groups, quaternions and 4-dimensional groups, Hurwitz integral quaternions, composition algebras, Moufang loops, octonions and 8-dimensional geometry, integral octonions, and the octonion projective plane. <s> BIB005
To trace the origins of non-associative ring theory: the first example of a non-associative ring is generally credited to John T. Graves, who discovered the octonions in 1843. The octonions form an 8-dimensional algebra over R that is neither associative nor commutative. They were rediscovered by Cayley in 1845 and are therefore sometimes known as the Cayley numbers. Every nonzero octonion has a multiplicative inverse, so the octonions form a division ring, albeit a non-associative one. For a comprehensive account of the octonions see [9]. The passage from R to C, from C to H, and from H to O is in each case a kind of doubling process, and at each stage something is lost: from R to C we lose the ordering of R, from C to H commutativity, and from H to O associativity. This process has been generalized to algebras over fields, and indeed over rings; it is called Dickson doubling or Cayley-Dickson doubling, see BIB005 BIB001 . If we apply the Cayley-Dickson doubling process to the octonions we obtain a structure called the sedenions, a 16-dimensional non-associative algebra. In the physics community much current work is focused on octonion models, see BIB002 BIB003 BIB004 . Historically speaking, the inventors or discoverers of the quaternions, octonions and related algebras (Hamilton, Cayley, Graves, Grassmann, Jordan, Clifford and others) were working from a physical point of view and wanted their abstractions to be helpful in solving natural problems.
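The doubling process just described is easy to experiment with. The following sketch (illustrative Python; the helper names `cd_mul`, `cd_conj` and `basis` are ours, and the product below is one common convention for the Cayley-Dickson formula, not taken from the survey) represents an element of the 2^n-dimensional algebra as a tuple of 2^n reals, so that lengths 1, 2, 4 and 8 give R, C, H and O. It then exhibits exactly the losses mentioned above: quaternions are non-commutative but associative, while octonions are no longer associative.

```python
def cd_conj(x):
    """Cayley-Dickson conjugate: (a, b)* = (a*, -b), applied recursively."""
    if len(x) == 1:
        return x
    h = len(x) // 2
    return cd_conj(x[:h]) + tuple(-t for t in x[h:])

def cd_mul(x, y):
    """One common form of the doubling product:
    (a, b)(c, d) = (ac - d b*, a* d + c b), recursing down to reals."""
    if len(x) == 1:
        return (x[0] * y[0],)
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    first = tuple(p - q for p, q in zip(cd_mul(a, c), cd_mul(d, cd_conj(b))))
    second = tuple(p + q for p, q in zip(cd_mul(cd_conj(a), d), cd_mul(c, b)))
    return first + second

def basis(n, k):
    """The k-th standard basis vector of the n-dimensional algebra."""
    return tuple(1.0 if i == k else 0.0 for i in range(n))

# Quaternions (length 4) are non-commutative ...
i, j = basis(4, 1), basis(4, 2)
assert cd_mul(i, j) != cd_mul(j, i)
# ... but octonions (length 8) additionally fail to be associative:
e1, e2, e4 = basis(8, 1), basis(8, 2), basis(8, 4)
assert cd_mul(cd_mul(e1, e2), e4) != cd_mul(e1, cd_mul(e2, e4))
```

Doubling once more, to tuples of length 16, produces the sedenions mentioned above, where the product is still defined but the algebra acquires zero divisors.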
Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> The object of this paper is to give a new proof of the theorem that every Lie algebra over a field K of characteristic zero has a faithful representation. The first proof of this result, at least when K is algebraically closed, is due to Ado (1). Later Cartan (2) gave a simpler and entirely different proof for the case when K is the field of either real or complex numbers. Cartan's proof depends on the integration of the Maurer-Cartan equations and therefore is of a non-algebraic character. The present proof is of course algebraic and seems to differ from the earlier ones in approaching the problem quite directly. Also the result established is slightly sharper than the usual one in so far as we assert the existence of a faithful representation in which every element of the maximal nilpotent ideal of the given Lie algebra is mapped on a nilpotent matrix. I am very much indebted to Professor C. Chevalley for his advice and help in improving the presentation of the proof. Also I should like to thank Dr. G. D. Mostow for many interesting and valuable discussions. All algebras (whether Lie algebras or associative algebras) and vector spaces appearing in this paper are to be understood over the basic field K. A linear Lie algebra L is a Lie algebra whose elements are endomorphisms of some given vector space, the bracket operation in L being defined by [X, Y] = XY − YX. As far as possible we follow the notation and terminology of Chevalley's book (3) and his papers (4). In particular, if L is a Lie algebra and X ∈ L we denote by ad X the derivation of L defined by (ad X)Y = [X, Y]. The following notion of the semidirect sum of a Lie algebra and its algebra of derivations is important for our purpose. DEFINITION. Let L be a Lie algebra and D the algebra of its derivations. By the semidirect sum of L and D is meant a Lie algebra L + D defined as follows.
Considered as a vector space, L + D is the direct sum of L and D, so that an element of L + D is a pair (X, D) with X ∈ L and D ∈ D. The bracket operation in L + D is defined by <s> BIB001 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Let L be a Lie ring and denote the product of x and y in L by [x, y]. The ring L is said to satisfy the Engel condition (cf. (1)) if for every pair of elements x, y ∈ L there is an integer k = k(x, y) such that [x, y, y, ..., y] = 0 with y repeated k times. If k(x, y) can be taken equal to a fixed integer n for all x, y ∈ L, then L is said to satisfy the n-th Engel condition. <s> BIB002 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Given any associative ring A we can form, using its operations and its elements, two new rings. These use the elements of A and the addition as defined in A, but new multiplications are introduced to render them rings, albeit not necessarily associative rings. The first of these, the Lie ring AL of A, uses a multiplication defined by [a, b] = ab − ba for any a, b ∈ A, where ab is the ordinary associative product of elements in A. The second of these, the Jordan ring A' of A, has its multiplication defined by a∘b = ab + ba for any pair of elements a, b in A. Being defined in a manner so decidedly dependent on the associative product of A, it is natural to expect that an intimate relationship should exist between the structure of these two new rings and that of A. In this paper we study one phase of this relationship, namely the connection between the ideal structure of A as an associative ring and the ideal structure of AL and A' as Lie and Jordan rings respectively. To be more specific, we investigate how simplicity of A as an associative ring reflects into analogous properties of AL and A'.
When we say that U is an ideal of A', or, equivalently, when we say that U is a Jordan ideal of A, we mean that U is an additive subgroup of A and that for any x ∈ U and any y ∈ A, x∘y = xy + yx is an element of U. We similarly define Lie ideals of A and ideals of AL. Although the main results of this paper deal with the case in which A is a simple ring, many of the other results do not require the assumption of simplicity in order to remain valid; so, unless otherwise stated, we make no assumption of simplicity for A. <s> BIB003 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Introduces the concepts and methods of the Lie theory in a form accessible to the non-specialist by keeping the mathematical prerequisites to a minimum. The book is directed towards the reader seeking a broad view of the subject rather than elaborate information about technical details. <s> BIB004 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> In this paper the Lie structure of prime rings of characteristic 2 is discussed. Results on Lie ideals are obtained. These results are then applied to the group of units of the ring, and also to Lie ideals of the symmetric elements when the ring has an involution. This work extends recent results of I. N. Herstein, C. Lanski and T. S. Erickson on prime rings whose characteristic is not 2, and results of S. Montgomery on simple rings of characteristic 2. 1. Prime rings. We first extend the results of Herstein [5]. Unless otherwise specified, all rings will be associative. If R is a ring, R has a Lie structure given by the product [x, y] = xy − yx, for x, y ∈ R. A Lie ideal of R is any additive subgroup U of R with [u, r] ∈ U for all u ∈ U and r ∈ R. By a commutative Lie ideal we mean a Lie ideal which generates a commutative subring of R. Denote the center of R by Z. We recall that if R is prime, then the nonzero elements of Z are not zero divisors in R.
In this case, if Z ≠ 0 and F is the quotient field of R, then R ⊗_Z F is a prime ring, every element of which can be written in the form r ⊗ a⁻¹ for a ∈ Z, a ≠ 0. Thus R ⊗_Z F is naturally isomorphic to RZ⁻¹, the localization of R at Z. We will consider R imbedded in RZ⁻¹ in the usual way (see [2]). We begin with some easy lemmas. LEMMA 1. If R is semiprime and U is a Lie ideal of R with u² = 0 for all u ∈ U, then U = 0. <s> BIB005 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Let $R$ be an associative ring with centre $Z$. The aim of this paper is to study how the ideal structure of the Lie ring of derivations of $R$, denoted $D(R)$, is determined by the ideal structure of $R$. If $R$ is a simple (respectively semisimple) finite-dimensional $Z$-algebra and δ$(z)$ = 0 for all δ ∈ $D(R)$, then every derivation of $R$ is inner and $D(R)$ is known to be a simple (respectively semisimple) Lie algebra (see [7, 5]). Here we are interested in extending these results to the case where $R$ is a prime or semi-prime ring. <s> BIB006 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> In [2] we proved that if G is a finite group containing an involution whose centralizer has order bounded by some number m, then G contains a nilpotent subgroup of class at most two and index bounded in terms of m. One of the steps in the proof of that result was to show that if G is soluble, then |G/F(G)| is bounded by a function of m, where F(G) is the Fitting subgroup of G. We now show that, in this part of the argument, the involution can be replaced by an arbitrary element of prime order.
<s> BIB007 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Abstract It is proved that if a locally nilpotent group G admits an almost regular automorphism of prime order p, then G contains a nilpotent subgroup G1 such that |G : G1| ≤ f(p, m) and the nilpotency class of G1 is at most g(p), where f is a function of p and the number m of fixed elements, and g depends on p only. An analog is proved for Lie rings (not necessarily locally nilpotent). These give an affirmative answer to the questions raised by Khukhro. <s> BIB008 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> We consider locally nilpotent periodic groups admitting an almost regular automorphism of order 4. The following results are proved: (1) If a locally nilpotent periodic group G admits an automorphism ϕ of order 4 having exactly m<∞ fixed points, then (a) the subgroup {ie176-1} contains a subgroup of m-bounded index in {ie176-2} which is nilpotent of m-bounded class, and (b) the group G contains a subgroup V of m-bounded index such that the subgroup {ie176-3} is nilpotent of m-bounded class (Theorem 1); (2) If a locally nilpotent periodic group G admits an automorphism ϕ of order 4 having exactly m<∞ fixed points, then it contains a subgroup V of m-bounded index such that, for some m-bounded number f(m), the subgroup {ie176-4}, generated by all f(m)th powers of elements in {ie176-5}, is nilpotent of class ≤3 (Theorem 2). <s> BIB009 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Abstract In this paper we prove that there are functions f(p, m, n) and h(m) such that any finite p-group with an automorphism of order p^n, whose centralizer has p^m points, has a subgroup of derived length ≤ h(m) and index ≤ f(p, m, n). This result gives a positive answer to a problem raised by E. I.
Khukhro (see also Problem 14.96 from the “Kourovka Notebook” (1999, E. I. Khukhro and V. D. Mazurov (Eds.), “The Kourovka Notebook: Unsolved Problems in Group Theory,” 14th ed., Novosibirsk)). <s> BIB010 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Lie algebra is an area of mathematics that is largely used by electrical engineering students, mainly at post-graduation level in the control area. The purpose of this paper is to illustrate the use of Lie algebra to control nonlinear systems, essentially in the framework of mobile robot control. The study of path following control of a mobile robot using an input-output feedback linearization controller is performed. The effectiveness of the nonlinear controller is illustrated with simulation examples. <s> BIB011 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Let L be a Lie ring or a Lie algebra of arbitrary, not necessarily finite, dimension. Let φ be an automorphism of L and let CL(φ) = {a ∈ L | φ(a) = a} denote the fixed-point subring. The automorphism φ is called regular if CL(φ) = 0, that is, φ has no non-trivial fixed points. By Kreknin’s theorem [20] if a Lie ring L admits a regular automorphism φ of finite order k, that is, such that φ^k = 1 and CL(φ) = 0, then L is soluble of derived length bounded by a function of k, actually, by 2^k − 2. (Earlier Borel and Mostow [3] proved the solubility in the finite-dimensional case, without a bound for the derived length.) In the present paper we prove that if a Lie ring admits an automorphism of prime-power order that is “almost regular,” then L is “almost soluble.” <s> BIB012 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> The well-known theorem of Borel–Mostow–Kreknin on solubility of Lie algebras with regular automorphisms is generalized to the case of almost regular automorphisms.
It is proved that if a Lie algebra L admits an automorphism ϕ of finite order n with finite-dimensional fixed-point subalgebra of dimension dimCL(ϕ)=m, then L has a soluble ideal of derived length bounded by a function of n whose codimension is bounded by a function of m and n (Theorem 1). A virtually equivalent formulation is in terms of a (Z/nZ)-graded Lie algebra L whose zero component L0 has finite dimension m. The functions of n and of m and n in Theorem 1 can be given explicit upper estimates. The proof is of combinatorial nature and uses the criterion for solubility of Lie rings with an automorphism obtained in [E.I. Khukhro, Siberian Math. J. 42 (2001) 996–1000]. The method of generalized, or graded, centralizers is developed, which was originally created in [E.I. Khukhro, Math. USSR Sbornik 71 (1992) 51–63] for almost regular automorphisms of prime order. As a corollary we prove a result analogous to Theorem 1 on locally nilpotent torsion-free groups admitting an automorphism of finite order with the fixed points subgroup of finite rank (Theorem 3). We also prove an analogous result for Lie rings with an automorphism of finite order having finitely many fixed points (Theorem 2). <s> BIB013 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Isomorphisms between finitary unitriangular groups and those of associated Lie rings are studied. In this paper we investigate exceptional cases. <s> BIB014 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> We improve the conclusion in Khukhro's theorem stating that a Lie ring (algebra) L admitting an automorphism of prime order p with finitely many m fixed points (with finite-dimensional fixed-point subalgebra of dimension m) has a subring (subalgebra) H of nilpotency class bounded by a function of p such that the index of the additive subgroup |L: H| (the codimension of H) is bounded by a function of m and p. 
We prove that there exists an ideal, rather than merely a subring (subalgebra), of nilpotency class bounded in terms of p and of index (codimension) bounded in terms of m and p. The proof is based on the method of generalized, or graded, centralizers which was originally suggested in [E. I. Khukhro, Math. USSR Sbornik 71 (1992) 51–63]. An important precursor is a joint theorem of the author and E. I. Khukhro on almost solubility of Lie rings (algebras) with almost regular automorphisms of finite order. <s> BIB015 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> A classical nilpotency result considers finite p-groups whose proper subgroups all have class bounded by a fixed number n. We consider the analogous property in nilpotent Lie algebras. In particular, we investigate whether this condition puts a bound on the class of the Lie algebra. Some p-group results and proofs carry over directly to the Lie algebra case, some carry over with modified proofs and some fail. For the final of these cases, a certain metabelian Lie algebra is constructed to show a case when the p-groups and Lie algebra cases differ. <s> BIB016 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> V.M. Kurochkin [1] has formulated the following theorem: Every Σ-operator Lie ring L has a faithful representation in an associative Σ-operator ring A, where Σ is an arbitrary domain of operators for the ring L. In a subsequent note [2], V.M. Kurochkin pointed out the insufficient rigor of the proof he proposed for this theorem. <s> BIB017 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> In this chapter we shall make a study of rings satisfying certain ascending chain conditions. In the non-commutative case-and this is really the only case with which we shall be concerned- the decisive and incisive results are three theorems due to Goldie. 
The main part of the chapter will be taken up with a presentation of these. <s> BIB018 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> In this paper, we study Lie and Jordan structures in simple Γ-rings of characteristic not equal to two. Some properties of these Γ-rings are developed. <s> BIB019 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> The object of this paper is to study Lie structure in simple gamma rings. We obtain some structural results of simple gamma rings with Lie ideals. <s> BIB020 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Suppose that a finite group $G$ admits a Frobenius group of automorphisms $FH$ with kernel $F$ and complement $H$ such that the fixed-point subgroup of $F$ is trivial: $C_G(F)=1$. In this situation various properties of $G$ are shown to be close to the corresponding properties of $C_G(H)$. By using Clifford's theorem it is proved that the order $|G|$ is bounded in terms of $|H|$ and $|C_G(H)|$, the rank of $G$ is bounded in terms of $|H|$ and the rank of $C_G(H)$, and that $G$ is nilpotent if $C_G(H)$ is nilpotent. Lie ring methods are used for bounding the exponent and the nilpotency class of $G$ in the case of metacyclic $FH$. The exponent of $G$ is bounded in terms of $|FH|$ and the exponent of $C_G(H)$ by using Lazard's Lie algebra associated with the Jennings--Zassenhaus filtration and its connection with powerful subgroups. The nilpotency class of $G$ is bounded in terms of $|H|$ and the nilpotency class of $C_G(H)$ by considering Lie rings with a finite cyclic grading satisfying a certain `selective nilpotency' condition. 
The latter technique also yields similar results bounding the nilpotency class of Lie rings and algebras with a metacyclic Frobenius group of automorphisms, with corollaries for connected Lie groups and torsion-free locally nilpotent groups with such groups of automorphisms. Examples show that such nilpotency results are no longer true for non-metacyclic Frobenius groups of automorphisms. <s> BIB021 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Suppose that a finite group $G$ admits a Frobenius group of automorphisms FH of coprime order with cyclic kernel F and complement H such that the fixed point subgroup $C_G(H)$ of the complement is nilpotent of class $c$. It is proved that $G$ has a nilpotent characteristic subgroup of index bounded in terms of $c$, $|C_G(F)|$, and $|FH|$ whose nilpotency class is bounded in terms of $c$ and $|H|$ only. This generalizes the previous theorem of the authors and P. Shumyatsky, where for the case of $C_G(F)=1$ the whole group was proved to be nilpotent of $(c,|H|)$-bounded class. Examples show that the condition of $F$ being cyclic is essential. B. Hartley's theorem based on the classification provides reduction to soluble groups. Then representation theory arguments are used to bound the index of the Fitting subgroup. Lie ring methods are used for nilpotent groups. A similar theorem on Lie rings with a metacyclic Frobenius group of automorphisms $FH$ is also proved. <s> BIB022 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> We exhibit an explicit construction for the second cohomology group $H^2(L, A)$ for a Lie ring $L$ and a trivial $L$-module $A$. We show how the elements of $H^2(L, A)$ correspond one-to-one to the equivalence classes of central extensions of $L$ by $A$, where $A$ now is considered as an abelian Lie ring. 
For a finite Lie ring $L$ we also show that $H^2(L, \C^*) \cong M(L)$, where $M(L)$ denotes the Schur multiplier of $L$. These results match precisely the analogous situation in group theory. <s> BIB023 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> We generalize the common notion of descending and ascending central series. The descending approach determines a naturally graded Lie ring and the ascending version determines a graded module for this ring. We also link derivations of these rings to the automorphisms of a group. This uncovers new structure in 4/5 of the approximately 11.8 million groups of size at most 1000 and beyond that point pertains to at least a positive logarithmic proportion of all finite groups. <s> BIB024 </s> Literature Survey on Non-Associative Rings and Developments <s> Lie Rings (1870-2015) <s> Introduction to Lie Algebras and Representation Theory. <s> BIB025
In 1870 a very important non-associative class, known as Lie theory, was introduced by the Norwegian mathematician Sophus Lie. The theory of Lie algebras is an area of mathematics in which the methods of classical analysis and modern algebra work in harmony. This theory, a direct outgrowth of a central problem in the calculus, has today become a synthesis of many separate disciplines, each of which has left its own mark. The importance of Lie algebras for applied mathematics and for applied physics has also become increasingly evident in recent years. In applied mathematics, Lie theory remains a powerful tool for studying differential equations, special functions and perturbation theory. Lie theory finds applications not only in elementary particle physics and nuclear physics, but also in such diverse fields as continuum mechanics, solid-state physics, cosmology and control theory. Lie algebra is also used by electrical engineers, mainly in mobile robot control. For basic information on Lie algebras, the reader is referred to BIB004 BIB011 BIB025 . It is well known that a Lie algebra can be viewed as a Lie ring, so the theory of Lie rings can be used in the theory of Lie algebras. A Lie ring is defined as a non-associative ring whose multiplication is anti-commutative and satisfies the Jacobi identity, i.e. [a, [b, c]] + [b, [c, a]] + [c, [a, b]] = 0. Although Lie theory was introduced in 1870, the major developments were made in the 20th century, beginning with the paper of Hausdorff in 1906. In 1934-35, Ado proved that any finite-dimensional Lie algebra over the field of complex numbers can be represented in a finite-dimensional associative algebra. Moreover, in 1937, Birkhoff and Witt independently showed that every Lie algebra is isomorphic to a subalgebra of some algebra of the form A(−), where A(−) is the Lie ring obtained from an associative algebra A by defining x.y = xy − yx. They also found a formula for computing the rank of the homogeneous modules in a free Lie algebra on a finite number of generators.
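The construction A(−) can be verified directly: on any associative ring, the commutator [x, y] = xy − yx is anti-commutative and satisfies the Jacobi identity. A minimal illustrative Python sketch (the helper names `mat_mul`, `mat_add`, `bracket` are ours) checks both identities on 2×2 integer matrices:

```python
def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def bracket(A, B):
    """The Lie bracket [A, B] = AB - BA of the Lie ring A(-)."""
    AB, BA = mat_mul(A, B), mat_mul(B, A)
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(AB, BA)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, -1], [0, 5]]
zero = [[0, 0], [0, 0]]

# anti-commutativity: [A, B] + [B, A] = 0
assert mat_add(bracket(A, B), bracket(B, A)) == zero
# Jacobi identity: [A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0
jac = mat_add(mat_add(bracket(A, bracket(B, C)),
                      bracket(B, bracket(C, A))),
              bracket(C, bracket(A, B)))
assert jac == zero
```

Since both identities are consequences of associativity alone, the same check succeeds for any choice of square matrices, or indeed elements of any associative ring.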
Also in 1937, Magnus proved that the elements yi = 1 + xi of the ring H generate a free subgroup G of the multiplicative group of the ring H, and every element of the subgroup G_n (the n-th commutator subgroup) has the form 1 + l_n + w, where l_n is some homogeneous Lie polynomial (with respect to the operations x.y and x + y) of degree n in the generators, and w is a formal power series in which all the terms have degree greater than n. In 1947, Dynkin gave criteria to determine whether a given polynomial is a Lie polynomial. Later, Harish-Chandra BIB001 and Iwasawa proved that Ado's theorem holds for any finite-dimensional Lie algebra. Moreover, an important role in the theory of Lie rings is played by free Lie rings. In contrast to free alternative rings and free J-rings (free Jordan rings), free Lie rings have been thoroughly studied. In that context, in 1950, Hall pointed out a method for constructing a basis of a free Lie algebra. In addition, analogous theorems about the embedding of arbitrary algebras and of associative rings were proved by Zhukov in 1950 and by Malcev in 1952, respectively. In 1953-54, Lazard and Witt [261] studied representations of Σ-operator Lie rings in Σ-operator associative rings. The existence of such a representation was proved by them in the case of Σ-principal ideal rings, and in particular for Lie rings without operators. The example constructed by Shirshov in BIB017 shows that there exist non-representable Σ-operator Lie rings which do not have elements of finite order in the additive group. Also in 1954, Higgins showed that solvable rings satisfying the n-th Engel condition are nilpotent and, in continuation, Lazard studied nilpotent groups using large parts of the apparatus of Lie ring theory. In 1955, Cohn BIB002 constructed an example of a solvable Lie ring, with additive p-group (in characteristic p) and satisfying the p-th Engel condition, which is not nilpotent.
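The n-th Engel condition discussed above requires that applying ad(y) to x n times, i.e. forming [..[[x, y], y], .., y] with n copies of y, always gives zero. As a small hedged illustration (Python, with our own helper names, not code from any cited paper), the Lie ring of strictly upper triangular 3×3 integer matrices satisfies the 2nd Engel condition, since its lower central series vanishes after two steps:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(A, B):
    AB, BA = mat_mul(A, B), mat_mul(B, A)
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(AB, BA)]

def engel(x, y, n):
    """Apply ad(y) to x n times: x -> [x, y] -> [[x, y], y] -> ..."""
    for _ in range(n):
        x = bracket(x, y)
    return x

# generic strictly upper triangular 3x3 matrices
x = [[0, 1, 2], [0, 0, 3], [0, 0, 0]]
y = [[0, 4, 5], [0, 0, 6], [0, 0, 0]]
zero = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]

assert engel(x, y, 1) != zero   # the bracket itself is nonzero here ...
assert engel(x, y, 2) == zero   # ... but the 2nd Engel condition holds
```

For strictly upper triangular n×n matrices the same argument gives the (n−1)-th Engel condition, which is why such matrix rings are a standard source of nilpotent Lie rings.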
Related results were obtained for Lie rings with a finite number of generators and some restrictions on the additive group. Also in 1955, Malcev [175] considered the class of binary-Lie rings, which are related to Lie rings in a way analogous to the way alternative rings are related to associative rings. In 1955-56, Herstein BIB003 published work on associative rings dedicated to studying the rings A(−) under different assumptions on the ring A. In 1956, Witt proved that any subalgebra of a free Lie algebra is again free; this theorem is analogous to the theorem of Kurosh for subalgebras of free algebras. In 1957 many authors worked on Lie algebras. For example, Higman proved the nilpotency of any Lie ring which has an automorphism of prime order without nonzero fixed points; this statement allowed him to prove the nilpotency of finite solvable groups which have an automorphism satisfying the analogous condition. Gainov investigated the case of a ring whose additive group has no elements of order two, and showed that for such a ring to be binary-Lie it is sufficient that certain identities hold. In 1957-58, Kostrikin proved that the Engel condition implies nilpotency. This result is especially interesting because from it follows the positive solution of the group-theoretical restricted Burnside problem for p-groups with elements of prime order. Herstein and Kleinfeld examined the situation of a Lie ring L admitting a regular automorphism φ of finite order k, that is, such that φ^k = 1 and C_L(φ) = 0; then L is soluble of derived length bounded by a function of k, actually by 2^k − 2. They also discussed the bounded solubility of a Lie ring with a fixed-point-free automorphism, although the existing Lie ring methods cannot be used for bounding the derived length in general. Moreover, Kreknin and Kostrikin in 1963 proved that a Lie ring with a fixed-point-free automorphism of prime order p is nilpotent of p-bounded class.
In continuation, Kreknin and Kostrikin also showed that a Lie ring (algebra) admitting a regular (i.e., without nontrivial fixed points) automorphism of prime order p is nilpotent of class bounded by a function h(p) depending only on p. In 1967, Kreknin proved that a Lie ring (algebra) admitting a regular automorphism of finite order n is soluble of derived length bounded by a function of n. In 1969, Herstein BIB018 focused his study on the structures of the Jordan and Lie rings of simple associative rings; in the latter case the approach is via the structure of I(R), the Lie ring of inner derivations of R, or, equivalently, the Lie structure of R/Z. In 1970, Herstein studied the Lie structure of associative rings and proved some important results on the Lie structure of R/Z. In 1972, Lanski and Montgomery BIB005 studied the Lie structure of prime rings of characteristic 2 and obtained results on Lie ideals. These results were then applied to the group of units of the ring, and also to Lie ideals of the symmetric elements when the ring has an involution; this work extended earlier results of Herstein, Lanski and Erickson on prime rings whose characteristic is not 2, and results of S. Montgomery on simple rings of characteristic 2. In 1974, Kawamoto discussed prime and semiprime ideals of Lie rings and showed that, in a Lie algebra satisfying the maximal condition for ideals, any semiprime ideal is an intersection of a finite number of prime ideals, and the unique maximal solvable ideal is equal to the intersection of all prime ideals. Jordan et al. BIB006 in 1978 studied how the ideal structure of the Lie ring of derivations of R is determined by the ideal structure of R; moreover, the authors were interested in extending these results to the case where R is a prime or semi-prime ring.
Hartley et al. BIB007 in 1981 and Khukhro in 1986 showed that the results on Lie rings with regular or almost regular automorphisms of prime order have consequences for nilpotent (or even finite, or residually locally nilpotent-by-finite, etc.) groups with such automorphisms. In 1992, Khukhro generalized the work of Kreknin and Kostrikin on regular automorphisms: (almost) regularity of an automorphism of prime order implies (almost) nilpotency of the Lie ring (algebra), with corresponding bounds for the nilpotency class and the index (co-dimension). He also showed that a Lie ring (algebra) L admitting an automorphism φ of prime order p with finite fixed-point sub-ring of order m (with finite-dimensional fixed-point sub-algebra of dimension m) has a nilpotent sub-ring (sub-algebra) K of class bounded by a function of p with the index of the additive subgroup |L : K| (the co-dimension of K) bounded by a function of m and p. Moreover, Khukhro proved that if a periodic (locally) nilpotent group G admits an automorphism φ of prime order p with m = |C_G(φ)| fixed points, then G has a nilpotent subgroup of (m, p)-bounded index and of p-bounded class; this group result was also based on a similar theorem on Lie rings. This result was later extended by Medvedev BIB008 in 1994 to not necessarily periodic locally nilpotent groups. In 1996 and 1998, the authors of BIB009 developed a method of graded centralizers to study almost fixed-point-free automorphisms of Lie rings and nilpotent groups. Medvedev in 1999, Zapirain BIB010 in 2000 and Makarenko in 2001 established the most successful case, that of nilpotent (or finite) p-groups with an almost regular automorphism of order p^n, where theorems on regular automorphisms of Lie rings were used. Great progress has been made to date on Lie rings (algebras) with almost regular automorphisms; the history of this area of research started with the classical theorem of Kreknin.
In 2003, Khukhro and Makarenko BIB012 proved that if a Lie ring L admits an almost regular automorphism of prime-power order, then L is almost soluble. Moreover, in 2003 and 2004 Makarenko and Khukhro BIB013 succeeded in investigating the most general case of a Lie ring (algebra) with an almost regular automorphism of arbitrary finite order. Makarenko and Khukhro BIB013 in 2004 proved almost solubility of Lie rings and algebras admitting an almost regular automorphism of finite order, with bounds for the derived length and co-dimension of a soluble sub-algebra; for groups, however, even the fixed-point-free case remains open. In 2005, Kuzucuoglu BIB014 studied isomorphisms between finitary unitriangular groups and those of the associated Lie rings, and also investigated the exceptional cases. Makarenko BIB015 in 2005 improved the conclusion of Khukhro's theorem stating that a Lie ring (algebra) L admitting an automorphism of prime order p with finitely many, say m, fixed points (with finite-dimensional fixed-point sub-algebra of dimension m) has a sub-ring (sub-algebra) H of nilpotency class bounded by a function of p such that the index of the additive subgroup |L : H| (the co-dimension of H) is bounded by a function of m and p: he proved that there exists an ideal, rather than merely a sub-ring (sub-algebra), of nilpotency class bounded in terms of p and of index (co-dimension) bounded in terms of m and p. In 2008, Suanmali BIB016 used an analogous idea from the theory of group varieties to investigate varieties of Lie algebras. She considered the exponent bound problem for some varieties of nilpotent Lie algebras and extended Macdonald's results to finite-dimensional Lie algebras over a field of characteristic not 2 and 3. Paul and Sabur Uddin BIB019 in 2010 worked on Lie and Jordan structure in simple gamma rings and obtained some remarkable results concerning Lie and Jordan structure.
In 2010, Paul and Sabur Uddin BIB020 focused their discussion on the Lie structure of simple gamma rings and gave some structural results on simple gamma rings with Lie ideals. In 2011, Khukhro, Makarenko and Shumyatsky BIB021 developed a Lie ring theory which is used for studying groups G and Lie rings L with a metacyclic Frobenius group of automorphisms FH. Wilson in 2013 introduced three families of characteristic subgroups that refine the traditional verbal subgroup filters, such as the lower central series, to an arbitrary length. It was proved that a positive logarithmic proportion of finite p-groups admit at least five such proper nontrivial characteristic subgroups, whereas verbal and marginal methods explain only one. The placement of these subgroups in the lattice of subgroups is naturally recorded by a filter over an arbitrary commutative monoid M and induces an M-graded Lie ring. These Lie rings permit an efficient specialization of the nilpotent quotient algorithm to construct automorphisms and decide isomorphism of finite p-groups. In 2013, Khukhro and Makarenko BIB022 used representation theory arguments to bound the index of the Fitting subgroup and Lie ring methods for nilpotent groups; a similar theorem on Lie rings with a metacyclic Frobenius group FH of automorphisms was also proved. In 2014, the aim of Horn and Zandi BIB023 was to give an explicit description of the cohomology group H^2(L, A) and to show how its elements correspond one-to-one to the equivalence classes of central extensions of the Lie algebra L with the module A, where A is regarded as an abelian Lie ring. More recently, in 2015, Wilson BIB024 generalized the common notions of descending and ascending central series: the descending approach determines a naturally graded Lie ring, the ascending version determines a graded module for this ring, and he linked derivations of these rings to the automorphisms of a group.
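For context on the correspondence studied by Horn and Zandi, recall the standard description (sketched here for the case of a trivial module, not taken from their paper): a central extension of L by the abelian Lie ring A is encoded by a 2-cocycle, and H^2(L, A) collects the cocycles modulo coboundaries.

```latex
% 2-cocycle condition (trivial action of L on A):
\omega(x,y) = -\omega(y,x), \qquad
\omega([x,y],z) + \omega([y,z],x) + \omega([z,x],y) = 0.
% Associated central extension:
\hat{L} = L \oplus A, \qquad
[(x,a),(y,b)] = \bigl([x,y],\,\omega(x,y)\bigr).
% Two cocycles give equivalent extensions iff they differ by a coboundary:
(\delta f)(x,y) = f([x,y]) \quad\text{for some additive } f : L \to A,
\qquad H^2(L,A) = Z^2(L,A)/B^2(L,A).
```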
Alternative Rings (1930-2015)
To the best of our knowledge, the first detailed discussion of alternative rings was started in 1930 by the German author Zorn. An alternative ring R is defined by the system of identities (ab)b = a(bb) (right alternativeness) and (aa)b = a(ab) (left alternativeness) for all a, b ∈ R. In 1930, Zorn BIB001 mentioned the theorem of Artin, which states that every two elements of an alternative ring generate an associative sub-ring. By a result of Zorn BIB001 it was observed that the only non-associative summands permitted are finite Cayley-Dickson algebras (the first examples of alternative rings) with zero divisors. In 1933, Zorn also discussed the finite-dimensional case of alternative rings. In 1935, Moufang proved a generalization for alternative division rings: if (a, b, c) = 0, then a, b, c generate a division sub-ring which is associative. For more details regarding the finite-dimensional case the reader is referred to the contributions of Jacobson, Albert, Schafer BIB006 BIB007, and Dubisch and Perlis. In 1943, Schafer studied the alternative division algebras of degree two, independently of Zorn's results. In 1946, Forsythe and McCoy BIB003 observed that the result that an associative regular ring without nonzero nilpotent elements is a sub-direct sum of associative division rings is easily extendable to alternative rings. In 1947, Smiley BIB004 studied alternative regular rings without nilpotent elements and showed that every alternative algebraic algebra which has no nilpotent elements is a sub-direct sum of alternative division algebras. Kaplansky in 1947 presented many of the preliminary results which were valid at least for special alternative rings. Smiley BIB005 in 1948 studied the concept of the radical of an alternative ring, discussed the radicals of infinite order algebras, and was also able to show that Jacobson's definition of the radical of an associative ring applies to alternative rings.
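The alternative laws, and the failure of full associativity, can be observed numerically in the 8-dimensional Cayley-Dickson algebra (the octonions). The sketch below is our own illustration, using the doubling formula (a, b)(c, d) = (ac − d·b̄, ā·d + c·b): the alternators (x, x, y) and (x, y, y) vanish for all octonions, while a generic associator such as (e1, e2, e4) does not.

```python
def conj(x):
    # Cayley-Dickson conjugation: keep the real coordinate, negate the rest
    return [x[0]] + [-t for t in x[1:]]

def mul(x, y):
    # Cayley-Dickson doubling: (a,b)(c,d) = (ac - d*conj(b), conj(a)*d + c*b)
    n = len(x)
    if n == 1:
        return [x[0] * y[0]]
    h = n // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    ac, db = mul(a, c), mul(d, conj(b))
    ad, cb = mul(conj(a), d), mul(c, b)
    return [s - t for s, t in zip(ac, db)] + [s + t for s, t in zip(ad, cb)]

def assoc(x, y, z):
    # associator (x, y, z) = (xy)z - x(yz)
    return [s - t for s, t in zip(mul(mul(x, y), z), mul(x, mul(y, z)))]

def e(i):
    # i-th basis octonion
    v = [0] * 8
    v[i] = 1
    return v

x = [1, 2, -1, 0, 3, 1, 0, 2]   # two arbitrary integer octonions
y = [0, 1, 1, -2, 1, 0, 3, 1]

left_alt = assoc(x, x, y)         # vanishes: left alternative law
right_alt = assoc(x, y, y)        # vanishes: right alternative law
triple = assoc(e(1), e(2), e(4))  # nonzero: octonions are not associative
```

By Artin's theorem the subalgebra generated by any two octonions is associative; three basis units outside a common quaternion subalgebra, such as e1, e2, e4, witness non-associativity.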
In 1948, Kaplansky [125] also obtained the Cayley numbers as the only non-associative alternative division ring which is both connected and locally connected, and he conjectured that a similar result holds in the totally disconnected, locally compact case. A ring is defined to be right alternative in case (ab)b − a(bb) = 0 is an identical relation in the ring. Right alternative algebras were first studied by Albert in 1949, who showed that a semi-simple right alternative algebra over a field of characteristic 0 is alternative. In 1950, Brown and McCoy showed that every alternative ring has a greatest regular ideal. Also in 1950, Skornyakov provided a full description of alternative but not associative division rings; he showed that each such division ring is an algebra of dimension 8 over some field. Later, in 1951, Bruck and Kleinfeld BIB008 proved the result of Skornyakov independently. In 1951, Skornyakov proposed that the study of alternative rings in general begins with the study of alternative division rings, which play the role of the natural coordinatizing division rings in the theory of projective planes. Another result concerns right alternative division rings, which are of geometrical interest since they arise as coordinate systems of certain projective planes in which a configuration weaker than Desargues' is assumed to hold. In this connection Skornyakov in 1951 showed that a right alternative division ring of characteristic not 2 is alternative; this result first drew attention to right alternative rings. In 1952, Albert proved results for simple alternative rings, based on the properties given by Zorn BIB001 . Kleinfeld in 1953 proved that for the alternativity of a right alternative ring it is sufficient that [x, y, z]² = 0 implies [x, y, z] = 0.
Kleinfeld BIB009 in 1953 proved that even simplicity (that is, not having two-sided ideals) of an alternative but not associative ring implies that the ring is a Cayley-Dickson algebra. In 1953, Kleinfeld also noted that since right alternative rings without nilpotent elements are known to be alternative, it follows that free right alternative rings with two or more generators have non-zero nilpotent elements. In 1955, Kleinfeld strengthened his results by proving that any alternative but not associative ring, in which the intersection of all the two-sided ideals is not a nil ideal, is a Cayley-Dickson algebra over some field. Hence the class of alternative rings is much larger than the class of associative rings. San Soucie BIB010 in 1955 studied alternative and right alternative rings of characteristic 2 (2x = 0) and proved that if R is a right alternative division ring of characteristic two, then R is alternative if and only if R satisfies w(xy · x) = (wx · y)x. In 1957, Kleinfeld proved the very interesting identity [(ab − ba)², c, d](ab − ba) = 0, and he also showed that in the free alternative ring there are zero divisors. Smiley BIB011 in 1957 analyzed the proof of Kleinfeld and noticed that it is sufficient to check only these cases: x = y, x = yz − zy, x = (yz − zy)y, x = [y, y, z], or z = wy and x = [y, y, w] for some w. He regarded the study of free right alternative rings as one of the main tasks of the theory of alternative rings. In 1960, Hashimoto introduced the notion of *-modularity of right ideals of an alternative ring and showed a connection between the intersection of all the *-modular maximal right ideals and the radical SR(A) in an alternative ring A. In 1963, it was shown by Kleinfeld that in an arbitrary alternative ring the fourth power of every commutator lies in the nucleus.
Also, Dorofeev in 1963 proved that in a free alternative ring with six or more generators there exist elements a, b, c, d, r, s such that ((a, b)(c, d) + (c, d)(a, b), r, s) ≠ 0. In 1965, Slater asserted that a prime alternative ring R of characteristic not 3 that is not associative can be embedded in a Cayley-Dickson algebra over the quotient field of the center of R. In 1967, Humm BIB012 discussed a necessary and sufficient condition for a simple right alternative ring to be alternative, assuming throughout that the characteristic is not 2 or 3. The treatment required an idempotent e in R and used the subspaces R1(e) and R0(e) of the Albert decomposition. In 1967, Humm and Kleinfeld BIB013 showed with the help of an example that the square of a commutator need not always lie in the nucleus. They also showed the existence of specific nilpotent elements in the free alternative ring on four or more generators, and proved abstractly the existence of an ideal I ≠ 0 with I² = 0. Slater in 1967, in his paper on the nucleus and center in alternative rings, considered an arbitrary alternative ring R with nucleus N and center Z, and investigated the weakest natural conditions on R sufficient to ensure his results. He applied the results to amplify comments by Humm and Kleinfeld on free alternative rings and gave examples of alternative rings. Slater in 1968 discussed ideals in semiprime alternative rings; the results of that paper concerning a given right ideal A did not require semiprimeness of R. In 1969, Kleinfeld worked on right alternative rings without proper right ideals: he showed that a right alternative ring R without proper right ideals, of characteristic not two, containing idempotents e and 1, e ≠ 1, such that ex = e(ex) for all x ∈ R, must be alternative and hence a Cayley vector-matrix algebra of dimension 8 over its center.
Moreover, Slater in 1969 proved the natural extension to arbitrary rings of the classical Wedderburn-Artin theorem for associative ones. He also considered the special case where R is in addition purely alternative, that is, has no nonzero nuclear ideals. He listed virtually all the radicals that have been proposed for (alternative) rings in the literature, and showed that on the class of rings with D.C.C. they all coincide. He also discussed analogues, for arbitrary rings with D.C.C., of the classical results concerning idempotents in associative rings with D.C.C. In 1970, Slater discussed the class of admissible models. Since a prime ring need not be an algebra over a field, he intended to extend the class of admissible models at least slightly. For example, the Cayley integers are a prime ring that is not a Cayley-Dickson algebra, much as an integral domain is prime but need not be a field. He defined a Cayley-Dickson ring (CD ring) R to be a ring that can be imbedded in a certain natural way in a CD algebra over the quotient field of the (nonzero) center Z of R. He then showed that if R is cancellative alternative but not associative (and of characteristic ≠ 2) then R is a CD ring whose containing algebra is a CD division algebra. The added generality in the paper comes from the fact that a prime ring may have zero divisors: if R is prime with zero divisors [and not associative, and 3R ≠ (0)] then the containing algebra will be a split CD algebra instead of a CD division algebra. Again in 1970, Slater discussed localization results on ideals and right ideals of prime and weakly prime rings. He showed that if some exceptional weakly prime ring exists, then there exists an exceptional prime ring having a certain collection of properties taken together. Finally, he gave examples to show that if some exceptional ring exists, then the restrictions on characteristic imposed in most of the results were not excessive.
Slater in 1970 proved the natural extension to alternative rings of the classical Wedderburn-Artin theorem for semiprime associative rings, and considered the extension to arbitrary alternative rings of the classical methods, as well as the secondary results of the classical associative theory. He also discussed conditions in the alternative theory parallel to the classical connection between primitive idempotents and minimal right ideals, and examined the relation between his results and the classical structure theory established by Zorn. In 1970, Slater noted that the main facts about the minimal ideals and minimal right ideals of an associative ring are well known, and he proved corresponding results for an alternative ring R, making no restriction on the characteristic of R but often imposing restrictions of semiprimeness type. Slater in 1971 was concerned mainly with the extension to arbitrary (alternative) rings of Hopkins' theorem BIB002 that in an associative ring with D.C.C. on right ideals the (say, nil) radical is nilpotent. He also reworked and modified Zhevlakov's arguments to obtain nilpotence of S(R) without restriction on characteristic. It turns out that much of the work is done more simply by working with two-sided ideals, as opposed to the right ideals used by Zhevlakov. As a consequence, a substantial part of the work was done with the assumption of D.C.C. only on two-sided ideals, and the result on S(R) appeared as an easy corollary of this work. On the way he also improved the result that in a ring R with D.C.C. on two-sided ideals any solvable ideal is nilpotent, by allowing Baer-radical ideals in place of solvable ideals. In 1971, Hentzel BIB014 discussed the characteristics of right alternative rings with idempotents; he assumed all rings to have characteristic prime to 2 and 3, and used the Albert decomposition for idempotents for right alternative rings.
In 1971, Kleinfeld noted that alternative as well as Lie rings satisfy all of the following four identities: (i) (x², y, z) = x(x, y, z) + (x, y, z)x, (ii) (x, y², z) = y(x, y, z) + (x, y, z)y, (iii) (x, y, z²) = z(x, y, z) + (x, y, z)z, (iv) (x, x, x) = 0, where the associator (a, b, c) is defined by (a, b, c) = (ab)c − a(bc). He proved that if R is a ring of characteristic different from two satisfying (iv) and any two of the first three identities, then a necessary and sufficient condition for R to be alternative is that whenever a, b, c are contained in a sub-ring S of R which can be generated by two elements and (a, b, c)² = 0, then (a, b, c) = 0. Also, all such division rings must be alternative and hence either Cayley-Dickson division algebras or associative. Also, Kleinfeld in 1971 investigated rings R of characteristic different from two; the main results concerned either rings which have an idempotent e ≠ 1, or those which have no nilpotent elements. He proved that whenever R is simple and contains an idempotent e ≠ 1, then R must be alternative and hence either a Cayley vector-matrix algebra or associative. In 1975, Thedy BIB015 analyzed two natural concepts in a right alternative algebra R: the sub-module M generated by all alternators (x, x, y), and a new nucleus N. The later sections of his study dealt mainly with results on simple right alternative algebras: a simple 2-torsion-free right alternative algebra is alternative, hence either associative or a Cayley algebra over its center. Also in 1975, the work of Hentzel dealt with a GRA (generalized right alternative) ring R. It was shown that a certain sub-module I is an ideal of R, that I is commutative, and that I is the sum of ideals of R whose cube is zero. This means that if R is simple, or even nil-semisimple, then R is right alternative.
Since all the hypotheses on R are consequences of the right alternative law, showing that R is right alternative is as strong a result. He also showed that the ideal generated by each associator of the form (a, b, b) is a nilpotent ideal of index at most three. Miheev in 1975 constructed a finite-dimensional, prime, right alternative nil algebra with nilpotent heart; thus a prime right alternative ring need not be s-prime. In 1976, Rich BIB016 showed that the characterization by Levitzki in 1951 of the prime radical of an associative ring R as the set of strongly nilpotent elements of R can be adapted to apply to a wide class of non-associative rings. As a consequence it was shown that the prime radical is a hereditary radical for the class of alternative rings and that the prime radical of an alternative ring coincides with the prime radical of its attached Jordan ring. In 1978, Rose first gave a brief introduction to Cayley-Dickson algebras. He then axiomatized split Cayley-Dickson algebras over algebraically closed fields, showed that this theory is ℵ₁-categorical, model complete, and the model completion of the theory of Cayley-Dickson algebras, and studied stability in alternative rings. He also generalized ℵ₀-categoricity in associative rings to ℵ₀-categoricity in alternative rings. In 1980, Wene characterized those associative rings with involution in which each symmetric element is nilpotent or invertible. Analogous results were obtained for alternative rings, and the restriction was further relaxed to require only that each symmetric element is nilpotent or some multiple of it is a symmetric idempotent. Widiger in 1983 considered the class of all alternative rings in which every proper right ideal is maximal, using the theory of artinian rings. Kleinfeld in 1983 showed that a semiprime alternative ring can have no nonzero anti-commutative elements; however, this is not so for prime right alternative rings in general.
In 1988, Essannouni and Kaidi proved the natural extension to alternative rings of the classical Goldie theorem for semiprime associative rings. In 1994, Essannouni and Kaidi BIB017 showed that the socle of a semiprime Goldie ring is generated by a central idempotent and that a prime Goldie ring with a nonzero socle is a simple artinian ring. They extended these results to alternative rings, giving an analogue of Goldie's theorem for alternative rings; a Goldie-like theorem had been obtained earlier by the authors for noetherian alternative rings by a quite different method. Also in 1994, Kleinfeld and Smith noted that a ring is called s-prime if the 2-sided annihilator of a nonzero ideal must be zero; in particular, any simple ring or prime (−1, 1) ring is s-prime. They showed that a nonzero s-prime right alternative ring of characteristic ≠ 2 cannot be right nilpotent. In 2000, Goodaire BIB018 showed that for a right alternative ring R, the magma (R, •) is right alternative, that is, (x • y) • y = x • (y • y), and that if R is strongly right alternative, then (R, •) is a Bol magma with neutral element 0. Moreover, in 2001, Goodaire showed that in a strongly right alternative ring with unity, if U(R) is closed under multiplication, then U(R) is a Bol loop. Kunen and Phillips in 2005 partially answered two questions of Goodaire by showing that in a finite, strongly right alternative ring, the set of units (if the ring is with unity) is a Bol loop under ring multiplication, and the set of quasi-regular elements is a Bol loop under circle multiplication. Again in 2005, Cárdenas et al. BIB019 studied the notion of a (general) left quotient ring of an alternative ring and showed the existence of a maximal left quotient ring for every alternative ring that is a left quotient ring of itself. In 2007, Lozano and Molina BIB020 developed a Fountain-Gould-like Goldie theory for alternative rings.
They characterized the alternative rings which are Fountain-Gould left orders in semiprime alternative rings coinciding with their socle, and those which are Fountain-Gould left orders in semiprime artinian alternative rings. Furthermore, Bharathi et al. BIB021 in 2013 proved that if R is a semiprime and purely non-associative right alternative ring, then N = C. They also showed that the right nucleus Nr = C if R is purely non-associative, provided that either R has no locally nilpotent ideals or R is semiprime and finitely generated mod Nr. In 2014, Cárdenas et al. BIB022 introduced a notion of left non-singularity for alternative rings and proved that an alternative ring is left non-singular if and only if every essential left ideal is dense, if and only if its maximal left quotient ring is von Neumann regular. Finally, they obtained a Gabriel-like theorem for alternative rings. Ferreira and Nascimento BIB023 in 2014 noted that the relationship between the multiplicative and the additive structures of a ring has become an interesting and active topic in ring theory; focusing on the special case of an alternative ring, they investigated the problem of when a derivable map must be an additive map for this class of rings. Recently, in 2015, Satyanarayana et al. proved a peculiar property of the nucleus N in an alternative ring R: the nucleus contracts to the centre C when the alternative ring is the octonions, and expands to the whole algebra when the alternative ring is associative. Also in 2015, Jayalakshmi and Latha BIB024 presented some properties of the right nucleus in generalized right alternative rings. They showed that in a generalized right alternative ring R which is finitely generated or free of locally nilpotent ideals, the right nucleus Nr equals the center C, and they tried to prove the results of Ng Seong-Nam for generalized right alternative rings.
On the way they gave an example of a generalized right alternative ring that is not right alternative.
Jordan Rings (1933-2011)
In modern mathematics, an important notion is that of a non-associative structure. Structures of this kind are characterized by the fact that the product of elements satisfies a more general law than the associativity law. Jordan structures were introduced in 1932-1933 by the German physicist Pascual Jordan (1902-1980) in his algebraic formulation of quantum mechanics. The study of Jordan structures and their applications is at present a wide-ranging field of mathematical research. The systematic study and further development of general Jordan algebras was begun by Albert in 1946. One can define a Jordan ring as a commutative non-associative ring that satisfies the Jordan identity (xy)(xx) = x(y(xx)). In 1948, Jacobson observed that semi-isomorphisms were nothing more or less than ordinary isomorphisms of the non-associative Jordan ring determined by the given associative ring. In his paper he introduced the Jordan multiplication a.b = 1/2(ab + ba) and observed that if ordinary multiplication is replaced by this product then one obtains the Jordan ring determined by the associative ring. He also determined the isomorphisms between any two simple Jordan rings. Jacobson in 1948 also discussed the centre of a non-associative ring: if A is any non-associative ring, one can define the center of A to be the totality of elements c that commute with every element, c.a = a.c. It was also observed that if a ring contains a nilpotent element in its center then it contains a nilpotent two-sided ideal. In 1950, Jacobson and Rickart BIB001 defined a special Jordan ring to be a subset of an associative ring which is a subgroup of the additive group and which is closed under the compositions a → a² and (a, b) → aba. Such systems are also closed under the compositions (a, b) → ab + ba = {a, b} and (a, b, c) → abc + cba. The simplest instances of special Jordan rings are the associative rings themselves. The authors also studied the (Jordan) homomorphisms of these rings.
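The Jordan product and the Jordan identity above can be checked numerically on any associative ring. The sketch below (Python with NumPy; names are our own, and the product is scaled by 2 relative to a.b = 1/2(ab + ba) so that integer matrices stay exact, which affects neither identity since each side applies the product exactly three times):

```python
import numpy as np

def jordan(a, b):
    # Jordan product, scaled by 2 to stay in integers: a ∘ b = ab + ba.
    return a @ b + b @ a

rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.integers(-4, 5, size=(3, 3))
    y = rng.integers(-4, 5, size=(3, 3))
    xx = jordan(x, x)
    # Commutativity: x ∘ y = y ∘ x.
    assert np.array_equal(jordan(x, y), jordan(y, x))
    # Jordan identity: (xy)(xx) = x(y(xx)).
    assert np.array_equal(jordan(jordan(x, y), xx),
                          jordan(x, jordan(y, xx)))

# The Jordan product is commutative but not associative in general:
e11 = np.array([[1, 0], [0, 0]])
e12 = np.array([[0, 1], [0, 0]])
e21 = np.array([[0, 0], [1, 0]])
assert not np.array_equal(jordan(jordan(e11, e12), e21),
                          jordan(e11, jordan(e12, e21)))
```

The final assertion shows why a Jordan ring is genuinely non-associative as a ring: on matrix units, (e11 ∘ e12) ∘ e21 and e11 ∘ (e12 ∘ e21) differ.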
Jacobson and Rickart BIB002 in 1952 considered, for an associative ring with involution, the set H of self-adjoint elements h = h*; this set H is a special Jordan ring. In that paper they studied the homomorphisms of rings of this type and obtained an analogue of the matrix method for the rings H. The authors proved that any Jordan homomorphism of H can be extended to an associative homomorphism of U. They also showed that this result can be extended to locally matrix rings, and in this form it is applicable to involutorial simple rings with minimal one-sided ideals. On the way they obtained the Jordan isomorphisms of the Jordan ring of self-adjoint elements of an involutorial primitive ring with minimal one-sided ideals onto a second Jordan ring of the same type. Comparatively, Schafer in 1955 began the study of the class of so-called non-commutative J-rings (Jordan rings); the study of this class of rings is contained in the theory of algebras of finite dimension, and for more details the readers are referred to the literature. In 1956, Hall Jr. established the identity {aba}² = {a{ba²b}a}, which holds in abstract Jordan rings. This is immediate for special Jordan rings. The identity is proved by finding a partial basis for the free Jordan ring with two generators, the basis being found for all elements of degree at most 5 and for elements of degree 4 in a and degree 2 in b. Herstein in 1957 gave us the idea of the derivation of a Jordan ring from an associative one: for any associative ring A, a new ring, the Jordan ring of A, is obtained from its operations and elements by defining the product to be a ∘ b = ab + ba for all a, b ∈ A. In 1958, Shirshov made a detailed study of non-associative structures including Jordan rings, and constructed some special Jordan rings.
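Hall's identity being immediate for special Jordan rings can be seen concretely: inside an associative ring the triple product {aba} is just aba, so the identity is a rearrangement permitted by associativity. A quick check over integer matrices (Python with NumPy; illustrative only, since the substance of the theorem is that the identity holds in abstract Jordan rings, where no ambient associative product is available):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(200):
    a = rng.integers(-4, 5, size=(3, 3))
    b = rng.integers(-4, 5, size=(3, 3))
    # {aba} = aba in a special Jordan ring, so {aba}² = {a{ba²b}a}
    # reduces to (aba)(aba) = a(b a a b)a, immediate from associativity.
    aba = a @ b @ a
    assert np.array_equal(aba @ aba, a @ (b @ a @ a @ b) @ a)
```

The non-trivial content of Hall's 1956 result is precisely that the same identity survives in the free (abstract) Jordan ring, where it cannot be verified by regrouping an associative product.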
In 1963, Brown pointed out a problem of interest in non-associative algebras regarding the study of generalized Cayley algebras and exceptional simple Jordan algebras, which are closely related to the exceptional simple Lie algebras. In his work he defined a new class of simple non-associative algebras of dimension 56 over their centers, possessing nondegenerate trace forms, such that the derivations and left multiplications of elements of trace zero generate Lie algebras of type E7. Moreover, in 1964, Kleinfeld introduced the concepts of middle nucleus and center in simple Jordan rings, establishing that in a simple Jordan ring of characteristic ≠ 2 the middle nucleus and center coincide. McCrimmon BIB003 in 1966 discussed the structure, characteristics and general theory of Jordan rings. A Jordan ring (i.e., algebra over the ring of integers) is called non-degenerate if it has no proper absolute zero divisors. He also described a Jacobson ring as a Jordan ring in which the descending chain condition holds for Peirce quadratic ideals and each nonzero Peirce quadratic ideal contains a minimal quadratic ideal; these rings play a role in the Jordan theory analogous to that played by the artinian rings in the associative theory. In 1968, Tsai pointed out that several definitions of radicals for general non-associative rings had been given in the literature. The u-prime radical of Brown-McCoy is similar to the prime radical in an associative ring; however, it depends on the particular chosen element u. The purpose of his paper was to propose a definition of the Brown-McCoy type prime radical for Jordan rings so that the radical is independent of the element chosen. Tsai in 1969 proved that in any Jordan ring J there exists a maximal von Neumann regular ideal M; the existence of such an ideal in an associative ring A is well known.
In fact, M can be characterized as the set of all elements a in A such that every element of the principal ideal of A generated by a is regular, and Tsai showed that the same characterization holds for Jordan rings. Also in 1969, McCrimmon established a self-contained proof which does not depend on the classification of simple rings, taking motivation from the work of Jacobson BIB004, who had used the structure theory to reduce the problem to the case of simple rings and then checked the result for each of the various types of simple rings that can occur. Furthermore, in 1970, Meyberg gave a comparatively short proof of the Fundamental Formula, which is considered very important in the theory of Jordan rings and was first proved by Jacobson BIB004. Osborn in 1970 presented three related theorems: one on the structure of Jordan rings in which every element is either nilpotent or invertible, and two on the structure of associative rings with involution in which every symmetric element is either nilpotent or invertible. The first of these theorems generalizes a well-known result on the structure of Jordan algebras which states that if each element of a Jordan algebra J can be expressed as the sum of a nilpotent element and a scalar multiple of 1, then the nilpotent elements of J form an ideal. Also, Tsai BIB005 in 1970 gave an external characterization of the Levitzki radical of a Jordan ring U as the intersection of a family of prime ideals of U. By applying this characterization, it is easy to see that the Levitzki radical of a Jordan ring contains the prime radical of the same ring; for associative rings, where the prime radical is called the Baer radical, the corresponding statement was already well known.
If the minimal condition on ideals holds in a Jordan ring U, then the Levitzki radical L(U) and the prime radical R(U) of U coincide. In 1971, McCrimmon derived a general structure theory for non-commutative Jordan rings. He defined a Jacobson radical and showed that it coincides with the nil radical for rings with descending chain condition on inner ideals; semisimple rings with D.C.C. were shown to be direct sums of simple rings, and the simple rings to be essentially the familiar ones. In addition, he obtained results, which seem to be new even in characteristic ≠ 2, concerning algebras without finiteness conditions, and showed that an arbitrary simple non-commutative Jordan ring containing two nonzero idempotents whose sum is not 1 is either commutative or quasi-associative. Erickson and Montgomery in 1971 studied the special Jordan ring R⁺ of an associative ring R and, when R has an involution, the special Jordan ring S of symmetric elements. They first showed that the prime radical of R equals the prime radical of R⁺, and that the prime radical of R intersected with S is the prime radical of S. They also gave an elementary characterization, in terms of the associative structure of R, of the primeness of S, and finally proved that a prime ideal of R intersected with S is a prime Jordan ideal of S. Also in 1971, Shestakov considered a class of non-commutative Jordan rings generalizing the class of rings introduced by Block and Thedy. For rings of the given class he demonstrated a theorem on the nilpotency of null rings with a maximality condition for sub-rings and for anti-commutative rings satisfying the third Engel condition, and he generalized the nilpotency of finite-dimensional null algebras of the corresponding classes. He showed moreover that in two sufficiently broad subclasses of the class of rings considered there exists a locally nilpotent radical, and he also considered finite-dimensional non-commutative Jordan algebras.
In 1972, Lewand BIB006 examined some radical properties of quadratic Jordan algebras and showed that under certain conditions an ideal of a quadratic Jordan algebra is the radical. In 1973, Britten restricted his attention to the Jordan ring of symmetric elements of an associative ring with involution; although he considered the problem of integral domains in this restricted case, his main result is more general. He used the approach via Goldie's theorem for associative rings, i.e., T has a ring of quotients which is semi-simple artinian if and only if T is semi-prime, contains no infinite direct sum of left ideals and satisfies the A.C.C. on left annihilator ideals. He observed that if one replaces semi-prime by prime, then semi-simple may be replaced by simple, and it can be shown that, when T has an involution, the conditions imposed on left ideals are implied by the A.C.C. or D.C.C. on left ideals. In 1974, Britten obtained a Jordan ring of quotients for H(R), where R is a 2-torsion free semiprime associative ring with involution: conditions are put on the Jordan ring H(R) of symmetric elements which imply the existence of a ring of quotients that is a direct sum of involution simple artinian rings. Montgomery in 1974 studied the concept of quotient rings in a special class of Jordan rings; it is worth mentioning that this concept had not been developed for Jordan algebras before. In his work, he showed that if R is an associative ring with involution and J is a Jordan sub-ring of the symmetric elements containing the norms and traces of R, then, if J is a Jordan domain with the common multiple property, J has a ring of quotients which is a Jordan division algebra. Also, Ng Seong-Nam [211] in 1974 generalized a result of Osborn, originally proved for associative rings with involution, to non-associative Jordan rings with involution.
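The object H(R) discussed above can be made concrete. In the sketch below (illustrative Python, with an arbitrarily chosen R), R is the ring of 2×2 integer matrices with the transpose involution; the symmetric elements are closed under the Jordan product ab + ba but not under the associative product, which is why H(R) is studied as a Jordan ring rather than as an associative one.

```python
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def transpose(m):
    return [[m[j][i] for j in range(2)] for i in range(2)]

def is_symmetric(m):
    return m == transpose(m)

# two elements of H(R): fixed points of the transpose involution
h = [[1, 2], [2, 3]]
k = [[0, 1], [1, 4]]
assert is_symmetric(h) and is_symmetric(k)

# the associative product leaves H(R) ...
hk = mat_mul(h, k)
assert not is_symmetric(hk)

# ... but the Jordan product hk + kh stays inside H(R),
# since (hk + kh)^T = k^T h^T + h^T k^T = kh + hk
j = mat_add(mat_mul(h, k), mat_mul(k, h))
assert is_symmetric(j)
```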
In addition, Loustau in 1974 established some results regarding radical extensions of Jordan rings, proving along the way analogues for Jordan rings of known commutativity results for associative rings. Further, he extended commutativity results to associative division algebras with involution whose symmetric elements are a radical extension of a commutative sub-algebra. In 1979, Petersson completed the solution of the classification problem for locally compact Jordan division rings initiated in earlier work, showing that a locally compact non-discrete Jordan division ring is a finite-dimensional Jordan division algebra over its centroid, which is a locally compact non-discrete field. Moreover, in 1986, Slinko described the structure of a connected component of a locally compact alternative or Jordan ring. It was shown that each locally compact semiprime alternative or Jordan ring is a topological direct sum of its zero connected component, which is a semisimple finite-dimensional algebra over R, and a totally disconnected locally compact semiprime ring. This result can be viewed as a far-reaching generalization of the classical Pontryagin theorem on connected associative locally compact skew fields. Furthermore, it was also proved that a connected locally compact alternative or Jordan ring having no nonzero idempotents is nilpotent, and that the quasi-regular radical of an alternative or Jordan locally compact ring is closed. In 1986, González et al. introduced an order relation in Jordan rings: the relation ≤ defined by x ≤ y if and only if xy = x², x²y = xy² = x³ is an order relation for a class of Jordan rings, and a Jordan ring R is isomorphic to a direct product of Jordan division rings if and only if ≤ is a partial order on R such that R is hyperatomic and orthogonally complete.
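The González order relation can be tested mechanically. As a hedged sketch, the snippet below takes the commutative associative ring Z/15Z, viewed as a Jordan ring and isomorphic by the Chinese remainder theorem to the direct product of the Jordan division rings Z/3Z and Z/5Z, and checks that ≤ is indeed a partial order there (the theorem's further hypotheses, hyperatomicity and orthogonal completeness, are not verified here).

```python
N = 15  # Z/15Z ~ Z/3Z x Z/5Z, a direct product of (Jordan) division rings

def leq(x, y):
    # x <= y  iff  xy = x^2  and  x^2 y = x y^2 = x^3   (all mod N)
    return ((x * y - x * x) % N == 0
            and (x * x * y - x * y * y) % N == 0
            and (x * y * y - x ** 3) % N == 0)

R = range(N)
assert all(leq(x, x) for x in R)                      # reflexivity
assert all(x == y for x in R for y in R
           if leq(x, y) and leq(y, x))                # antisymmetry
assert all(leq(x, z) for x in R for y in R for z in R
           if leq(x, y) and leq(y, z))                # transitivity
```

Unwinding the congruences componentwise shows that x ≤ y holds exactly when, in each prime component, x = 0 or x = y, which is how the product-of-division-rings structure forces a partial order.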
Later, in 1987, Garijo discussed the Jordan regular ring associated with a finite JBW-algebra. He showed that every finite JBW-algebra A is contained in a von Neumann regular Jordan ring with no new idempotents; moreover, he proved that every finite JBW-algebra has the common multiple property (a non-associative analogue of the Ore condition) and that this regular ring is the (unique) total ring of quotients of A. Hentzel and Peresi [84] in 1988 introduced almost Jordan rings. They proved that any Jordan ring of characteristic ≠ 2, 3 satisfies the identity 2((ax)x)x + a((xx)x) = 3(a(xx))x, and that this identity together with commutativity implies the Jordan identity in any semiprime ring. In 1988, Slinko generalized the result of Petersson that any continuous Jordan division ring is finite-dimensional over its centroid; secondly, he proved conditions for the solvability of the equations xU_a = b for a ≠ 0, conditions which are actually required for the definition of a Jordan division ring. In 1993, Chuvakov proved that in the class of non-commutative Jordan rings satisfying the identity ([x, y], z, z) = 0, for an arbitrary radical r, any ideal of an r-semisimple ring is r-semisimple; thus the problem of the heredity of a radical r in this class is equivalent to the problem of the r-radicality of any ideal of an r-radical ring. He also proved that in the class of non-commutative Jordan rings M the locally nilpotent radical is hereditary. For a deeper study the reader is referred to the excellent books on Jordan algebras by Braun and Koecher [14] in 1966, Jacobson BIB004 in 1968 and McCrimmon BIB008 in 2004, which contain substantial material on general non-associative algebras; related research can also be found in the proceedings of the international conferences on non-associative algebra and its applications BIB007. In 2011, Radu BIB009 gave an overview of the most important applications of Jordan structures within mathematics and in physics.
Nowadays, mathematics is becoming more and more non-associative, and the author predicts in his paper that in a few years non-associativity will govern mathematics and the applied sciences.
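Before leaving Jordan rings, the Hentzel–Peresi almost-Jordan identity quoted earlier can also be verified computationally for special Jordan rings. The sketch below (illustrative Python) uses the Jordan product a ∘ b = ab + ba on 2×2 integer matrices and confirms 2((a∘x)∘x)∘x + a∘((x∘x)∘x) = 3(a∘(x∘x))∘x on random inputs.

```python
import random

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, m):
    return [[c * m[i][j] for j in range(2)] for i in range(2)]

def jordan(a, b):
    # special Jordan product a o b = ab + ba
    return mat_add(mat_mul(a, b), mat_mul(b, a))

def almost_jordan_lhs(a, x):
    # 2((a o x) o x) o x + a o ((x o x) o x)
    return mat_add(mat_scale(2, jordan(jordan(jordan(a, x), x), x)),
                   jordan(a, jordan(jordan(x, x), x)))

def almost_jordan_rhs(a, x):
    # 3 (a o (x o x)) o x
    return mat_scale(3, jordan(jordan(a, jordan(x, x)), x))

def rand_mat():
    return [[random.randint(-4, 4) for _ in range(2)] for _ in range(2)]

random.seed(2)
for _ in range(200):
    a, x = rand_mat(), rand_mat()
    assert almost_jordan_lhs(a, x) == almost_jordan_rhs(a, x)
```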
Loop Rings (1944-2015)
Historically, the concept of a non-associative loop ring was, to our knowledge, first introduced in a paper by Bruck in 1944 BIB002. Non-associative loop rings appeared to be little more than a curiosity until the 1980s, when Goodaire found a class of non-associative Moufang loops whose loop rings satisfy the alternative laws. A loop ring is defined as follows: given a loop L and a commutative associative ring R with 1, one forms the loop ring RL just as one would form a group ring if L were a group. Its elements are the formal sums ∑ α_l l (l ∈ L, α_l ∈ R, almost all zero), and the binary operations of addition "+" and multiplication "·" are defined by ∑ α_l l + ∑ β_l l = ∑ (α_l + β_l) l and (∑ α_l l)(∑ β_m m) = ∑ α_l β_m (lm), the product being extended by distributivity. In 1946, Bruck revealed that the group ring result about the centre has a natural extension, establishing that the centre of a loop algebra is spanned by conjugacy class sums. He also proved that a loop ring RL is associative (commutative) if and only if L is associative (commutative). In 1955, Paige BIB003 gave a striking illustration of the phenomenon that the associative and commutative identities are very special: in general, an identity in L does not lift to RL, and an identity on RL imposes much more than simply the same identity on L. He also proved that if R is a ring of characteristic relatively prime to 30 and L is a loop such that RL is commutative and power associative, then L is a group. In 1959, Hall did excellent work on right Moufang loops. A Moufang loop is a loop which satisfies the right Moufang identity ((xy)z)y = x(y(zy)); the identity is named for Ruth Moufang, who discovered it in geometrical investigations in the first half of the twentieth century. Any group is a Moufang loop but, as Chein showed in 1974, there are families of Moufang loops which are not associative. In 1983, Goodaire BIB009 proved that if the Moufang identity on L extends to a loop ring RL, then RL must be an alternative ring.
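The construction of RL is easy to implement. The sketch below (illustrative Python; the order-5 Cayley table is an arbitrary example, not taken from the survey) builds the integral loop ring ZL of a small non-associative loop and confirms Bruck's observation that RL is associative if and only if L is.

```python
# Cayley table of a loop L of order 5 with identity 0: a Latin square
# whose first row and column are the identity map. It is not a group
# (the only group of order 5 is cyclic), and indeed not associative.
T = [
    [0, 1, 2, 3, 4],
    [1, 0, 3, 4, 2],
    [2, 3, 4, 0, 1],
    [3, 4, 1, 2, 0],
    [4, 2, 0, 1, 3],
]

def loop_mul(x, y):
    return T[x][y]

assert loop_mul(loop_mul(1, 1), 2) != loop_mul(1, loop_mul(1, 2))  # L non-associative

# An element of ZL is a coefficient vector (alpha_0, ..., alpha_4);
# multiplication extends the loop operation by distributivity,
# exactly as for a group ring.
def ring_add(a, b):
    return tuple(a[i] + b[i] for i in range(5))

def ring_mul(a, b):
    c = [0] * 5
    for i in range(5):
        for j in range(5):
            c[loop_mul(i, j)] += a[i] * b[j]
    return tuple(c)

def basis(i):
    return tuple(1 if k == i else 0 for k in range(5))

# the non-associativity of L is inherited by ZL ...
x, y, z = basis(1), basis(1), basis(2)
assert ring_mul(ring_mul(x, y), z) != ring_mul(x, ring_mul(y, z))
# ... while basis(0) remains the identity of ZL
assert ring_mul(basis(0), basis(3)) == basis(3)
```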
In 1985, Chein and Goodaire presented a method of constructing all RA loops, one which begins with the class of abelian groups possessing 2-torsion. They further determined when two RA loops constructed by this method are isomorphic; in particular, they determined when two non-isomorphic groups with property LC can both be embedded as index-two sub-loops in the same RA loop. Subsequently, in 1986, Goodaire and Chein worked in collaboration and obtained more satisfying information about RA loops. Soon after, Goodaire and Parmenter BIB004 in 1986 demonstrated that certain well-known theorems concerning units in integral group rings hold more generally for integral loop rings which are alternative. Afterwards, in 1987, Goodaire and Parmenter established conditions which guarantee the semi-simplicity of alternative loop rings with respect to any nil radical and with respect to the Jacobson radical. In 1988, Goodaire and Milies BIB005 settled the isomorphism problem for alternative loop rings: first, it was shown that a Moufang loop whose integral loop ring is alternative is determined up to isomorphism by that loop ring; secondly, it was shown that every normalized automorphism of an alternative loop ring ZL is the product of an inner automorphism of QL and an automorphism of L. Additionally, in 1989, Goodaire and Milies BIB006 established that every torsion unit in an alternative loop ring over Z is ± a conjugate of a conjugate of a loop element. Here ZL denotes the integral alternative loop ring of a finite loop L. It is a well-known result of Higman BIB001 that if L is an abelian group then ±g, g ∈ L, are the only torsion units (invertible elements of finite order) in ZL. When L is not abelian, another obvious source of units is the set ±γ⁻¹gγ of conjugates of elements of L by invertible elements in the rational loop algebra QL.
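Goodaire's theorem ties Moufang loops to alternative rings, and the prototypical non-associative alternative ring is the ring of octonions. As an illustrative aside (a standard Cayley–Dickson doubling sketch, not code from the survey), the snippet below builds integral octonions as nested pairs and checks the two alternative laws and the right Moufang identity ((xy)z)y = x(y(zy)), together with the failure of associativity.

```python
import random

# Cayley-Dickson doubling: a number is either an int or a pair (a, b);
# (a, b)(c, d) = (ac - d*b, da + bc*), with conjugate (a, b)* = (a*, -b).
def neg(x):
    if isinstance(x, int):
        return -x
    return (neg(x[0]), neg(x[1]))

def conj(x):
    if isinstance(x, int):
        return x
    return (conj(x[0]), neg(x[1]))

def add(x, y):
    if isinstance(x, int):
        return x + y
    return (add(x[0], y[0]), add(x[1], y[1]))

def mul(x, y):
    if isinstance(x, int):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def from_list(v):
    # pack a flat coefficient list (length a power of 2) into nested pairs
    if len(v) == 1:
        return v[0]
    h = len(v) // 2
    return (from_list(v[:h]), from_list(v[h:]))

def rand_oct():
    return from_list([random.randint(-3, 3) for _ in range(8)])

random.seed(3)
for _ in range(50):
    x, y, z = rand_oct(), rand_oct(), rand_oct()
    assert mul(x, mul(x, y)) == mul(mul(x, x), y)    # left alternative law
    assert mul(mul(y, x), x) == mul(y, mul(x, x))    # right alternative law
    assert mul(mul(mul(x, y), z), y) == mul(x, mul(y, mul(z, y)))  # Moufang

# yet the octonions are not associative: some triple of basis units fails
units = [from_list([1 if j == i else 0 for j in range(8)]) for i in range(8)]
assert any(mul(mul(a, b), c) != mul(a, mul(b, c))
           for a in units for b in units for c in units)
```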
In the alternative but not associative case, one can form potentially more torsion units by considering conjugates of conjugates γ₁⁻¹(γ₂⁻¹gγ₂)γ₁ and so forth. Furthermore, Chein and Goodaire BIB007 in 1990 continued their investigation of loops which give rise to alternative loop rings. If the coefficient ring has characteristic 2, these loops turn out to form a surprisingly wide class, in contrast to the situation of characteristic ≠ 2. Their paper described many properties of this class, included diverse examples of Moufang loops united by the fact that they have loop rings which are alternative, and discussed analogues in loop theory of a number of important group-theoretic constructions. In 1992, Vasantha Kandasamy introduced a new notion in loop rings KL, that of normal elements: an element α ∈ KL is called a normal element of KL if αKL = KLα. If every element of KL is normal, KL is called a normal loop ring; normal sub-loop rings were also defined. Vasantha Kandasamy in 1994 investigated a notion called the strict loop ring: if L is a loop and R a commutative ring with 1, the loop ring RL is called a strict loop ring if the set of all ideals of RL is ordered by inclusion. He also gave a class of loop rings which are not strict loop rings. Moreover, Goodaire and Robinson BIB008 in 1994 exhibited a class of loops which have strongly right alternative loop rings that are not alternative, and proved fundamental propositions which generalize the necessary and sufficient conditions for a loop to have a strongly right alternative loop ring. Besides this, in 1995, Vasantha Kandasamy studied the mod p envelope of associative structures, replacing groups in his study by the non-associative case of loops.
Again in 1995, Goodaire and Milies discussed further examples of Moufang loops whose loop rings are alternative but not associative BIB009. Since that time, a great deal of work has been devoted to the study of such loops and their loop rings, and in their paper the authors gave a brief discussion of those loops whose loop rings are alternative. In 1996, Goodaire and Milies BIB010 considered RA loops, that is, loops whose loop rings in characteristic different from 2 are alternative but not associative. The authors showed that every finite sub-loop H of normalized units in the integral loop ring of an RA loop L is isomorphic to a sub-loop of L, and that there exist units in the rational loop algebra QL conjugating H into L. Thus a conjecture of Zassenhaus, which is open for group rings, holds for alternative loop rings (which are not associative). In addition, Goodaire and Robinson BIB011 in 1996 proposed a construction of loops L which have right alternative loop rings RL that are not left alternative. The construction generates loop rings RL which are Bol, and hence right alternative: merely set z = 1 in the Bol identity (xy.z)y = x(yz.y). Such loop rings are called strongly right alternative, as they satisfy this more stringent condition. Barros and Juriaans BIB012 in 1996 noted that Higman had proved a classical result giving necessary and sufficient conditions for the units of an integral group ring to be trivial; they extended this result to a bigger class of diassociative loops which includes abelian groups, groups with a unique non-identity commutator, RA loops, and other classes of loops. Again in 1997, Barros and Juriaans BIB013 solved the isomorphism problem for integral loop rings of finitely generated RA loops using a decomposition of the loop of units, and described the finitely generated RA loops whose loops of units satisfy a certain property.
In 1998, Kunen BIB014 showed that the right alternative law implies the left alternative law in loop rings of characteristic other than 2. He also exhibited a loop which fails to be a right Bol loop even though its characteristic 2 loop rings are right alternative. In 1999, Goodaire BIB015 sketched the history of loop rings which are not associative, from early results of Bruck and Paige through the more recent discovery of alternative and right alternative rings and the work of Chein, Robinson and Goodaire himself. In 2001, Bhandari and Kaila BIB016 observed that the additive as well as the multiplicative Jordan decomposition holds in alternative loop algebras of finite RA loops, and characterized the RA loops for which the additive Jordan decomposition holds in the integral loop ring. Multiplicative Jordan decomposition (MJD) in ZL, where L is a finite RA loop with cyclic centre, was analyzed, besides settling MJD for integral loop rings of all RA loops of order ≤ 32. It was also shown that for any finite RA loop L, U(ZL) is an almost splittable Moufang loop. Again in 2001, Goodaire and Milies BIB017 considered an RA loop L, that is, a loop whose loop ring in any characteristic is an alternative but not associative ring, and found necessary and sufficient conditions for the (Moufang) unit loop of RL to be solvable when R is the ring of rational integers or an arbitrary field. Along the way, Goodaire and Milies BIB018 in 2001 observed that an RA loop has a torsion-free normal complement in the loop of normalized units of its integral loop ring, and examined whether an RA loop can be normal in its unit loop. Furthermore, in 2002, Nagy showed that the fundamental ideal of a loop ring FL is nilpotent if and only if the multiplication group is a p-group, where p is prime, L is a finite loop of p-power order and F is a field of characteristic p. The normality of f-unitary units in alternative loop rings was discussed in BIB019.
In this paper, they also found necessary and sufficient conditions for U_f(ZL) to be normal in U(ZL), where U_f(ZL) is the set of all f-unitary units and U(ZL) is the loop of all units in ZL. Goodaire in 2007 described some of the advances in the theory of loops whose loop rings satisfy interesting identities; he wrote this paper in memory of his friend Robinson, with whom he did research. Again in 2007, Goodaire discussed advances in this theory that had taken place primarily since 1998. The major emphasis was on Bol loops that have strongly right alternative loop rings and on Jordan loops, a hitherto largely ignored class of commutative loops some of whose loop rings satisfy the Jordan identity (x²y)x = x²(yx). He raised a number of open questions and included several suggestions for further research. Doostie and Pourfaraj [40] in 2007 studied two finite rings and proved that the first is commuting regular and that the second contains commuting regular elements as well as idempotents (where p, p₁ and p₂ are odd primes, and i, m and n are positive integers such that m < n, (m, n) = 1 and (m − 1, n) = 1). They also defined the commuting regular semigroup ring, commuting regular loop ring and commuting regular groupoid ring. In 2008, Chein et al. established some connections between loops whose loop rings, in characteristic 2, satisfy the Moufang identities and loops whose loop rings, in characteristic 2, satisfy the right Bol identities. Again in 2008, Chein and Goodaire BIB020 observed that the possession of a unique non-identity commutator or associator is a property that dominates the theory of loops whose loop rings, while not associative, nevertheless satisfy an interesting identity. All loops with loop rings satisfying the right Bol identity (such loops are called SRAR) had been known to have this property.
They presented various constructions of other kinds of SRAR loops. They also considered Bol loops whose left nucleus is an abelian group of index 2, showed that the loop rings of some such loops are strongly right alternative, and exhibited various SRAR loops with more than two commutators. In 2009, Dart and Goodaire BIB021 investigated the existence of loop rings that are not associative but satisfy the Moufang or Bol identities. It turned out that, with one exception, loop rings satisfying an identity of Bol-Moufang type all satisfy a Moufang or Bol identity. They also highlighted some similarities and differences in the consequences of several Bol-Moufang identities as they apply to loops and rings. Moreover, in 2012, Giraldo Vergara discussed in detail the development of the theory of loop rings, which has intrigued mathematicians from different areas. He also noted that in recent years this theory has developed considerably; as an example, the complete description of the loop of invertible elements of the Zorn algebra is now known. Recently, in 2014, Jayalakshmi and Manjula investigated the case where the ring has characteristic 2 and extended this to alternative loop rings by proving that the augmentation ideal of order 2n in characteristic 2 is nilpotent (of dimension 2n − 1). This, of course, means that virtually all the familiar radicals of alternative rings coincide with the augmentation ideal. Also in 2014, Jayalakshmi and Manjula discussed that the right alternative law implies the left alternative law in loop rings of characteristic other than 2, and showed that there exists a loop which fails to be an extra loop even though its characteristic-2 loop rings are right alternative.
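The identities that recur throughout this section, the left and right alternative laws, the Moufang and Bol identities, and full associativity, can be probed computationally on the archetypal alternative but non-associative ring: the octonions, built by Cayley-Dickson doubling. The following sketch is illustrative only (the doubling convention used is one of several common ones, and the check is numerical sampling, not a proof):

```python
import itertools
import random

# Cayley-Dickson doubling over the integers: an element of "depth" n is a
# pair of depth-(n-1) elements; depth 3 gives (integral) octonions.
def conj(x):
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    # (a, b)(c, d) = (a·c − conj(d)·b, d·a + b·conj(c)) --
    # one common convention for the Cayley-Dickson product.
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def rand_elt(depth):
    return (random.randint(-3, 3) if depth == 0
            else (rand_elt(depth - 1), rand_elt(depth - 1)))

random.seed(0)
octs = [rand_elt(3) for _ in range(12)]  # a sample of random octonions

# x(xy) = (xx)y and (xy)y = x(yy): the alternative laws hold for octonions.
left_alt = all(mul(x, mul(x, y)) == mul(mul(x, x), y) for x in octs for y in octs)
right_alt = all(mul(mul(x, y), y) == mul(x, mul(y, y)) for x in octs for y in octs)
# Full associativity fails for generic triples.
assoc = all(mul(mul(x, y), z) == mul(x, mul(y, z))
            for x, y, z in itertools.product(octs[:5], repeat=3))
print(left_alt, right_alt, assoc)
```

The octonions thus satisfy both alternative laws while failing associativity, which is exactly the behaviour the loop rings above are designed to exhibit.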
LA-Ring (2006-2016)
After the concept of loop rings (1944), a new class of non-associative ring theory was given by Yusuf in 2006 . Although the concept of LA-ring was given in 2006, its systematic study and further development were started in 2010 by Shah and Rehman in their paper . It is worth mentioning that this new class of non-associative rings, named left almost rings (LA-rings), was introduced after a gap of six decades since the introduction of loop rings. The LA-ring is in fact an offshoot of the LA-semigroup and the LA-group. It is a non-commutative, non-associative structure and, owing to its peculiar characteristics, has been emerging as a useful non-associative class with a reasonable contribution to non-associative ring theory. By an LA-ring we mean a non-empty set R with at least two elements such that (R, +) is an LA-group, (R, ·) is an LA-semigroup, and both left and right distributive laws hold. In the same paper, the authors discussed the LA-ring of finitely nonzero functions, which is in fact a generalization of the commutative semigroup ring: they generalized the structure of the commutative semigroup ring (the ring of a semigroup S over a ring R, represented as R[X; S]) to a non-associative LA-ring of a commutative semigroup S over an LA-ring R, represented as R[X_s; s ∈ S] and consisting of finitely nonzero functions; nevertheless it also possesses associative ring structures. Furthermore, they discussed LA-ring homomorphisms. The first ever definition of an LA-module over an LA-ring was also given by Shah and Rehman in the same paper. Later in 2010, Shah et al. introduced the notions of topological LA-groups and topological LA-rings, which generalize topological groups and topological rings respectively, and extended some characterizations of topological groups and topological rings to these settings.
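The defining laws are easy to probe computationally. A minimal illustrative sketch (not drawn from the surveyed papers) uses the classical operation a ⊕ b = b − a on the integers, paired with ordinary multiplication; since the multiplication here is associative, this is the trivial kind of example, but it shows the left invertive law, non-associativity of ⊕, and both distributive laws at work:

```python
import random

def la_add(a, b):
    # a (+) b = b - a : the classical left-invertive operation on the integers
    return b - a

random.seed(1)
triples = [tuple(random.randint(-50, 50) for _ in range(3)) for _ in range(1000)]

# Left invertive law: (a (+) b) (+) c == (c (+) b) (+) a
invertive = all(la_add(la_add(a, b), c) == la_add(la_add(c, b), a)
                for a, b, c in triples)
# (+) is genuinely non-associative...
non_assoc = any(la_add(la_add(a, b), c) != la_add(a, la_add(b, c))
                for a, b, c in triples)
# ...yet ordinary multiplication distributes over it from both sides.
left_dist = all(a * la_add(b, c) == la_add(a * b, a * c) for a, b, c in triples)
right_dist = all(la_add(a, b) * c == la_add(a * c, b * c) for a, b, c in triples)
print(invertive, non_assoc, left_dist, right_dist)  # expected: True True True True
```

Algebraically, (a ⊕ b) ⊕ c = c − (b − a) = a − b + c = (c ⊕ b) ⊕ a, while a ⊕ (b ⊕ c) = (c − b) − a, so ⊕ satisfies the left invertive law without being associative or commutative.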
In 2011, Shah and Shah established some basic and structural facts about LA-rings which are useful for future research on LA-rings. They proved basic results such as: if R is an LA-ring then R cannot be idempotent, and (a + b)² = (b + a)² for all a, b ∈ R; if an LA-ring R has left identity e then e + e = e, e + 0 = e and e = (e + 0)²; and if R is a cancellative LA-ring with left identity e then e + e = 0, and thus a + a = 0 for all a ∈ R. An interesting result is that in an LA-ring with left identity e, right distributivity implies left distributivity. Also in 2011, Shah et al. developed the notion of LA-module over an LA-ring defined earlier by Shah and Rehman, and further established the substructures, operations on substructures and the quotient of an LA-module by an LA-submodule. They also indicated the non-similarity of an LA-module to the usual notion of a module over a commutative ring. Moreover, in 2011, Shah, Rehman and Raees BIB001 generalized the concept of LA-ring by introducing the notion of the near left almost ring (abbreviated as nLA-ring) (R, +, ·): (R, +) is an LA-group, (R, ·) is an LA-semigroup, and one distributive property of "·" over "+" holds, where both binary operations "+" and "·" are non-associative. In continuation of BIB001 , Shah, Ali and Rehman in 2011 characterized nLA-rings through their ideals. They showed that the sum of ideals is again an ideal, and established a necessary and sufficient condition for an nLA-ring to be a direct sum of its ideals; furthermore, they observed that the product of ideals is only a left ideal. In 2012, Shah and Rehman explored some notions of ideals and M-systems in LA-rings and characterized LA-rings through some properties of their ideals. They also established that every subtractive subset of an LA-ring R is semi-subtractive, and that every quasi-prime ideal of an LA-ring R with left identity e is semi-subtractive.
Also in 2012, Shah et al. investigated intuitionistic fuzzy normal subrings in non-associative rings, extending the notions to the class of LA-rings by establishing intuitionistic fuzzy normal LA-subrings of LA-rings. Specifically, they proved that an IFS A = (µ_A, γ_A) is an intuitionistic fuzzy normal LA-subring of an LA-ring R if and only if the fuzzy sets µ_A and the complement of γ_A are fuzzy normal LA-subrings of R; they also showed that A = (µ_A, γ_A) is an intuitionistic fuzzy normal LA-subring of R if and only if the complement of µ_A and the fuzzy set γ_A are anti-fuzzy normal LA-subrings of R. In 2013, a notable development was made by Rehman et al., who demonstrated the existence of non-trivial LA-rings by example, using the mathematical program Mace4. With the existence of non-trivial LA-rings, the authors were finally able to remove the ambiguity surrounding associative multiplication, because the first example of an LA-ring given by Yusuf was trivial. Also in 2013, Gaketem BIB002 studied the properties of quasi-ideals of P-regular nLA-rings, the nLA-ring being a generalization of the LA-ring. In 2014, Alghamdi and Sahraoui broadened the concept of LA-module by constructing a tensor product of LA-modules. Although LA-groups and LA-modules need not be abelian, the new construction behaves like the standard tensor product of modules over a ring, and they extended some simple results from the ordinary tensor product to the new setting. In addition, Yiarayong BIB003 in 2014 studied left ideals, left primary and weakly left primary ideals in LA-rings, obtaining some characterizations of left primary and weakly left primary ideals and investigating the relationships between them.
Finally, he obtained necessary and sufficient conditions for a weakly left primary ideal to be a left primary ideal in LA-rings. Recently, in 2015, Hussain and W. Khan BIB004 characterized LA-rings by congruence relations. They showed that each homomorphism of LA-rings defines a congruence relation on LA-rings, discussed quotient LA-rings, and proved analogues of the isomorphism theorems for LA-rings. Also, Shah and Asima Razzaque in their paper discussed soft non-associative rings and explored some of their algebraic properties; the notions of soft M-systems, soft P-systems, soft I-systems, soft quasi-prime ideals, soft quasi-semiprime ideals, soft irreducible and soft strongly irreducible ideals were introduced and several related properties were investigated. Moreover, in 2016, Shah et al. BIB005 took a step forward in applying soft set theory to LA-rings by introducing soft LA-rings, soft ideals, soft prime ideals, idealistic soft LA-rings and soft LA-homomorphisms, providing a number of examples to illustrate these concepts.
Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities

INTRODUCTION
Urban scholars use the term "global city region" to refer to "a new metropolitan form characterised by sprawling polycentric networks of urban centres …" Such networks are becoming identified with both the potential and the reality of 'smart' city infrastructures of connected transportation, financial, energy, health, information and cultural systems. There are numerous definitions of a "smart city" across the literature, with little consensus BIB001 , and there are many technical issues involved BIB002 . For the purposes of this paper, the definition provided by ISO/IEC (2014) is considered an appropriately inclusive one, that is: The "smartness" of a city describes its ability to bring together all its resources, to effectively and seamlessly achieve the goals and fulfil the purposes it has set itself… [It] enables the integration and interoperability of city systems in order to provide value, both to the city as a whole, and to the individual citizen. This integration further enables potential synergies to be exploited and the city to function holistically, and to facilitate innovation and growth. In the context of such values and goals, there is a global movement in the implementation of smart cities, catalysed in the Global Forum of the World Foundation for Smart Communities in 1997. In particular, coordinated strategies and standards for smart city implementation are increasingly pervasive and being adopted at national and industry levels. For example, the UK Department of Business, Innovation and Skills commissioned BSI in 2012 to develop a standards strategy for smart cities in order to accelerate, and minimise the risks in, the implementation of smart cities in the UK. In 2011, the European Commission initiated the European Innovation Partnership on Smart Cities and Communities (EC 2015). In China, comparable initiatives have been established, such as the China Strategic Alliance of Smart City Industrial Technology Innovation.
In the United States, the Federal Smart Cities and Communities Task Force is seeking to embed new digital technologies into city and community infrastructures and services. The Australian government similarly launched a national Smart Cities Plan in 2016, aimed at positioning Australian cities to succeed in the digital economy (Australian Government 2017). Among individual cities themselves, there are examples of smart city plans being developed at local and municipal government level. One example is the GrowSmarter (2015) initiative, a collaborative EU-funded smart city project focusing on sustainable solutions to economic, social and environmental issues. The project involves what are termed the "Lighthouse Cities" of Stockholm, Cologne and Barcelona. It aims to integrate and demonstrate twelve smart solutions in energy, mobility and infrastructure in collaboration with twenty industrial partners; importantly, the project is intended to create a platform for sharing knowledge and experience. Industry involvement in smart city developments is especially key to such partnerships, and to supporting the technological enablers and connected platforms that underpin smart city infrastructures. The multinational communications and IT companies Cisco and Nokia are among the industry players developing strategic White Papers on the platform components of a successful smart city, and partnering with cities on pilot implementations. Across these standards and strategies is a shared vision of positioning communities at all scales to have equitable access to connected smart services that can enhance sustainability and quality of life, improve health and safety, and foster economic prosperity. Smart cities can help enable virtual collaboration of communities.
In the context of this paper, the citizens of a smart city are potential participants in its governance and in the evolving development of smarter services, including those related to accessing and preserving cultural heritage and the arts. At present, however, there are few visible examples of smart cultural initiatives integrated with smart city developments at either a pilot or a conceptual level. There is consequently a need to understand how populations can be supported by local capacities and by smarter cultural cities and regions, using advanced information systems, visualisation, and applications.
SMART CULTURAL HERITAGE
The concept of "smart cultural heritage", according to researchers of the EU-funded DATABENC (Distretto ad Alta Tecnologia per i Beni Culturali) initiative, is about digitally connecting institutions, visitors, and objects in dialogue. Smart heritage focuses on adopting more participatory and collaborative approaches, making cultural data freely available (open), and consequently increasing the opportunities for interpretation, digital curation, and innovation. This offers potential and unprecedented access to cultural artefacts and experiences across distances, in which cultural consumers are no longer passive recipients BIB003 BIB002 BIB004 (Garcia-Crespo 2016). As described in the Europeana White Paper on smart cities, "cultural heritage defines our identity and our communities. Sharing our past in smart city initiatives has the potential to promote social cohesion and increase innovation and tourism". In this way, smart cultural heritage is strongly associated with the identity of place and communities through smart technologies, knowledge and participation. It is not surprising that the cultural heritage sector has been working within smart requirements for many years, given the inseparable association with location and identity (Chianese et al. 2013). Projects such as ARCHEOGUIDE (Augmented Reality-based Cultural Heritage On-site Guide), a prototype multimedia guide developed for the archaeological site of ancient Olympia in Greece, provided augmented reconstructions of the ancient ruins together with audio information BIB001 . ARCHEOGUIDE supported a context-aware visitor application, i.e., a location-based application in which a user's location is identified through a sensing device and the user is provided with information bound to that specific location and the physical objects in the surroundings.
The development of context-aware services has been pervasive in demonstrator applications in the cultural heritage area, not least those focused on forms of digital data and user-defined interactions BIB004 . With the socio-technical rise of the mobile phone, museums and galleries worldwide developed mobile apps that visitors could download onto their own devices to create self-guided tours. The National Gallery in London was one of the first museums to develop such an app: LoveArt, an iPhone app launched in 2009.
Ann Borda & Jonathan P. Bowen

…different layers of a map at various scales and across thematic layers, and to change the visual appearance of the map, e.g., Google Earth applications.

Absent among the enabling technologies was evidence of the use of cloud computing platforms, although there are proposed smart cultural frameworks in the literature that include cloud platforms BIB001 BIB003 . Across the sample studies, it was difficult to determine the use of cloud infrastructure due to the lack of available technical literature on system architectures. In the cultural heritage domain, the Europeana Cloud is one of the larger cloud-based infrastructure projects in operation, hosting several million digital items and supporting data services arising from the Europeana Open Data and associated programs. IoT, a nearly synonymous term with smart cities, remains an evolving technology and has not reached an operational level of integration in smart cultural heritage, although there is potential for IoT to underpin various smart cultural services BIB002 . The EU funded DATABENC (2014)
Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> Enabling technologies <s> Invisible, attentive and adaptive technologies that provide tourists with relevant services and information anytime and anywhere may no longer be a vision from the future. The new display paradigm, stemming from the synergy of new mobile devices, context-awareness and AR, has the potential to enhance tourists’ experiences and make them exceptional. However, effective and usable design is still in its infancy. In this publication we present an overview of current smartphone AR applications outlining tourism-related domain-specific design challenges. This study is part of an ongoing research project aiming at developing a better understanding of the design space for smartphone context-aware AR applications for tourists. <s> BIB001 </s> Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> Enabling technologies <s> Augmented Reality (AR) Mobile Apps are a useful technology for Cultural Heritage Communication. The interdisciplinary field between Computation, Interaction Design and Heritage Interpretation is allowing the development of innovative case studies. Through online and offline observation we present a state-of-the-art review of Augmented Reality Apps in Cultural Historical Heritage Communication, placing AR as another tool in the broader context of Heritage Interpretation. <s> BIB002
At the time of this paper, there are no published standards specific to smart cultural heritage projects, as there are for smart cities, such as those developed by ISO/IEC or the IEEE Smart Cities Initiative (IEEE 2017). However, there are some advances towards developing platforms for smart cultural heritage utilising the enabling technologies that underlie smart city implementation. Among the enabling technologies, mobile broadband is pervasive across the case study examples, in use and/or in access. The cultural heritage sector has been an early adopter of mobile technologies for user engagement and the visitor experience in the development of mobile apps. It is also the most accessible and available of the technologies to the broadest spectrum of users, irrespective of their location BIB002 BIB001. Wireless Sensor Networks (WSN) are another layer of infrastructure that is increasingly common in supporting different smart scenarios. Smartphone tours and devices that are context-aware figure in most of the examples, such as the indoor digital trails of the O-Device and Journey of Inspiration, and the city trails of Paisatge and StreetMuseum. The application of NFC technology provides a more fine-grained context-awareness that allows users to receive customised information and a more realistic experience in close proximity, e.g., users can read or listen to comprehensive guides about landmarks they discover, while watching animations or playing games. The Pen at Cooper Hewitt builds on NFC reading technology to enable personalised and individual interaction. The use of BLE-enabled beacons across the Canadian Museum for Human Rights supports a digital trail that is layered with narrative and augmented reality. 120 universal access points also provide improved visitor navigation and accessibility for sight- and hearing-impaired visitors. Sign language, for example, is available through a dedicated app.
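The beacon-driven context-awareness described above can be illustrated with a small sketch. This is not the implementation used by any of the named museums: the calibrated transmit power, path-loss exponent and zone thresholds below are illustrative assumptions, based on the common log-distance path-loss model for BLE RSSI.

```python
# Hedged sketch: mapping BLE beacon signal strength (RSSI) to proximity
# zones, as a context-aware museum trail app might do. All constants
# are illustrative assumptions, not values from any real deployment.

def estimate_distance(rssi_dbm: float, tx_power_dbm: float = -59.0,
                      path_loss_exp: float = 2.0) -> float:
    """Estimate distance in metres from RSSI via the log-distance model."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def proximity_zone(rssi_dbm: float) -> str:
    """Coarse zones a trail app could use to trigger content."""
    d = estimate_distance(rssi_dbm)
    if d < 0.5:
        return "immediate"  # e.g. play the artefact-specific narrative
    if d < 3.0:
        return "near"       # e.g. surface the room's audio guide
    return "far"            # e.g. show wayfinding hints only

if __name__ == "__main__":
    for rssi in (-50, -62, -80):
        print(rssi, proximity_zone(rssi))
```

In practice an app would also smooth RSSI over several advertisements before classifying, since raw readings fluctuate heavily indoors.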
Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> Visualisation technologies <s> Abstract Museums are interested in the digitizing of their collections not only for the sake of preserving the cultural heritage, but to also make the information content accessible to the wider public in a manner that is attractive. Emerging technologies, such as VR, AR and Web3D are widely used to create virtual museum exhibitions both in a museum environment through informative kiosks and on the World Wide Web. This paper surveys the field, and while it explores the various kinds of virtual museums in existence, it discusses the advantages and limitation involved with a presentation of old and new methods and of the tools used for their creation. <s> BIB001 </s> Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> Visualisation technologies <s> Invisible, attentive and adaptive technologies that provide tourists with relevant services and information anytime and anywhere may no longer be a vision from the future. The new display paradigm, stemming from the synergy of new mobile devices, context-awareness and AR, has the potential to enhance tourists’ experiences and make them exceptional. However, effective and usable design is still in its infancy. In this publication we present an overview of current smartphone AR applications outlining tourism-related domain-specific design challenges. This study is part of an ongoing research project aiming at developing a better understanding of the design space for smartphone context-aware AR applications for tourists. <s> BIB002 </s> Smart Cities and Cultural Heritage - A Review of Developments and Future Opportunities. <s> Visualisation technologies <s> It is of paramount importance that cultural heritage professionals are directly involved in the design of digitally augmented experiences in their museum spaces. 
We propose an approach based on a catalogue of reusable narrative and interaction strategies with step-by-step instructions on how to adapt and instantiate them for a specific museum and type of visitors. This work is conducted in the context of the European-funded project meSch. <s> BIB003
Forms of geovisualisation, from floor guides to location points and thematic maps, are pervasive and essential features of the applications and services across the selected case studies. This underpins a primary characteristic of smart environments, that of location-awareness relating to the user, place, and surrounding objects at any one time. Geovisualisation also reinforces other visualisation technologies such as AR, which is bound to a location point and wayfinding activities. 3D visualisation, including computer-generated objects, figured prominently in those examples with AR applications and immersive environments, such as PureLand 360 and Ai WeiWei 360, offering rich and layered forms of information. The digital 3D models in the preservation and reconstruction examples, Rekrei and Zamani, highlight that the protection of heritage and culture must remain a high priority for all cultures. These online collections of 3D reconstructions representing endangered or destroyed artefacts, cultural landmarks and monuments bring new resonance to the role that "virtual museums" can play in terms of knowledge and wider accessibility of cultural heritage BIB001. The worldwide engagement of thousands of users supporting Rekrei's mission, in particular, also profiles the potential role of citizens in collectively protecting global cultural heritage, and shows that we do not need to be physically in the same place to participate in this goal. The pervasiveness of AR and/or AR elements in the selected projects supports the growing adoption of this technology in the cultural heritage sector as a popular visualisation paradigm, extending from tourism applications BIB002 to educational and exhibition spaces (Cassella & Coelho 2013, Garcia-Crespo 2016) BIB003.
Museums, galleries and other cultural organisations have been trialling AR systems for several years, such as in the example of ARCHEOGUIDE, and the National Science Museum in Tokyo, in which AR technology was used to overlay "flesh" onto the dinosaur skeletons on display. The Skin & Bones AR app at the National Museum of Natural History has advanced this use to bring dinosaur skeletons and fossils alive through a mix of AR, animation and gamification. The Museum has also provided the opportunity for children to use the AR app at home with a downloadable resource that simulates the museum experience. The potential of AR in outdoor settings is exemplified by the Museum of London's highly successful StreetMuseum app, which has been available as a downloadable app for over five years. The ROM Ultimate Dinosaur exhibition brought dinosaurs to life in the city of Toronto at bus shelters and public spaces, with signposted instructions on how to activate the AR experience. RecoVR Mosul and Ai WeiWei both use AR elements in that they use the real environment as a background with overlaid information on top. The applications are themselves accessible through web browsers as photorealistic 360° panoramas, but alternatively can be experienced as 3D immersions in virtual reality (VR). The potential intersections of VR and smart environments have yet to be explored.
Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> <s> This paper presents a system-level power management technique for energy savings of event-driven applications. We present a new predictive system-shutdown method to exploit sleep mode operations for energy saving. We use an exponential-average approach to predict the upcoming idle period. We introduce two mechanisms, prediction-miss correction and prewake-up, to improve the hit ratio and to reduce the delay overhead. Experiments on four different event-driven applications show that our proposed method achieves high hit ratios in a wide range of delay overheads, which results in a high degree of energy saving with low delay penalties. <s> BIB001 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> <s> Corporate energy usage policy is typically difficult to design and impossible to enforce. The problem stems from the fact that there are several complexities in this enforcement and passive tools such as Energy Star are naive; they do not cater for corporate policies. The result of this is an uncontrolled usage of computers in the corporate culture resulting in significant effects on the environment. This is in addition to an effect on the economy due to an increase in corporate electricity bills. In this paper, we propose the use of a multiagent-based approach comprising an intelligent self-organizing system managing the energy usage policy. For validation, using an agent-based model we simulate the proposed intelligent self-organizing architecture for monitoring corporate energy utilization. Extensive simulation experiments demonstrate the effectiveness of the proposed approach.
<s> BIB002 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> <s> Energy consumption of the Information and Communication Technology (ICT) sector has grown exponentially in recent years. A major component of today’s ICT is constituted by the data centers, which have experienced an unprecedented growth in their size and population recently. The Internet giants like Google, IBM and Microsoft house large data centers for cloud computing and application hosting. Many studies on the energy consumption of data centers point to the need to evolve strategies for energy efficiency. Due to large-scale carbon dioxide (\(\mathrm{CO}_2\)) emissions in the process of electricity production, the ICT facilities are indirectly responsible for considerable amounts of greenhouse gas emissions. Heat generated by these densely populated data centers needs large cooling units to keep temperatures within the operational range. These cooling units, obviously, escalate the total energy consumption and have their own carbon footprint. In this survey, we discuss various aspects of energy efficiency in data centers with added emphasis on its motivation for data centers. In addition, we discuss various research ideas, industry-adopted techniques and the issues that need our immediate attention in the context of energy efficiency in data centers. <s> BIB003 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> <s> Interest has been growing in powering datacenters (at least partially) with renewable or "green" sources of energy, such as solar or wind. However, it is challenging to use these sources because, unlike the "brown" (carbon-intensive) energy drawn from the electrical grid, they are not always available. This means that energy demand and supply must be matched, if we are to take full advantage of the green energy to minimize brown energy consumption.
In this paper, we investigate how to manage a datacenter's computational workload to match the green energy supply. In particular, we consider data-processing frameworks, in which many background computations can be delayed by a bounded amount of time. We propose GreenHadoop, a MapReduce framework for a datacenter powered by a photovoltaic solar array and the electrical grid (as a backup). GreenHadoop predicts the amount of solar energy that will be available in the near future, and schedules the MapReduce jobs to maximize the green energy consumption within the jobs' time bounds. If brown energy must be used to avoid time bound violations, GreenHadoop selects times when brown energy is cheap, while also managing the cost of peak brown power consumption. Our experimental results demonstrate that GreenHadoop can significantly increase green energy consumption and decrease electricity cost, compared to Hadoop. <s> BIB004 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> <s> OpenStack is a massively scalable open source cloud operating system that is a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds. OpenStack provides a series of interrelated projects delivering various components for a cloud infrastructure solution, and controls large pools of storage, compute and networking resources throughout a datacenter that are all managed through a dashboard (Horizon) that gives administrators control while empowering their users to provision resources through a web interface. In this paper, we present a comparative study of cloud computing platforms such as Eucalyptus, OpenStack, CloudStack and OpenNebula, which are open source software, the cloud computing layered model, the components of OpenStack, and the architecture of OpenStack.
We further discuss how to install OpenStack, how to build a virtual machine (VM) in an OpenStack cloud using the CLI on RHEL 6.4, and finally cover the latest OpenStack release, Icehouse, which is used for building public, private, and hybrid clouds, and introduce the new features added in Icehouse. The aim of this paper is to show the importance of OpenStack as a cloud provider and give the best solution for service providers as well as enterprises. <s> BIB005
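The predictive system-shutdown method summarised in BIB001 above lends itself to a compact sketch: an exponential average of past idle periods predicts the next one, and the device sleeps only when the prediction exceeds its break-even time. The weighting factor and break-even threshold here are illustrative assumptions, and the prediction-miss correction and pre-wake-up mechanisms of the original are omitted.

```python
# Hedged sketch of exponential-average idle-period prediction for
# predictive system shutdown (after BIB001). alpha and the break-even
# time are illustrative; miss correction and pre-wake-up are omitted.

class IdlePredictor:
    def __init__(self, alpha: float = 0.5, initial_prediction: float = 0.0):
        self.alpha = alpha
        self.prediction = initial_prediction  # predicted next idle (s)

    def observe(self, actual_idle: float) -> None:
        """Update the exponential average with the last observed idle period."""
        self.prediction = (self.alpha * actual_idle
                           + (1 - self.alpha) * self.prediction)

    def should_sleep(self, break_even_s: float = 5.0) -> bool:
        """Sleep only if the predicted idle period amortises the
        shutdown/wake-up overhead (the break-even time)."""
        return self.prediction > break_even_s

if __name__ == "__main__":
    p = IdlePredictor(alpha=0.5)
    for idle in (2.0, 4.0, 12.0, 20.0):  # observed idle periods (s)
        p.observe(idle)
        print(f"predicted={p.prediction:.2f}s sleep={p.should_sleep()}")
```

The exponential average lets recent behaviour dominate while retaining history, which is why the original reports high hit ratios on bursty event-driven workloads.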
the concept of our society, i.e., the immune system, the human brain (neuron structure), ecosystems and human societies. Big data provides numerous services and infrastructures to companies and has opened new research directions in the field of computer science. Most current applications of cloud computing use distributed computing with varying degrees of connectivity and interaction. Big data provides computation and efficient processing to millions of users, with a complexity level similar to that of CAS BIB001 . Beyond the complexity of CAS, achieving energy efficiency in cloud computing and big data is a global challenge. Plenty of methods have been proposed by researchers to reduce power consumption in cloud and big data infrastructure. Most of the solutions propose powering off unused components; other solutions focus on optimal distribution of the data among different components BIB002 . Cloud computing provides numerous services to users but poses certain challenges because of its complex nature. The devices used in the cloud are so large in number that the complexity of such a system exceeds even that of the human brain. Apart from complexity, however, clouds also face certain challenges such as the security and privacy of data. Big data allows users to host, access and process data at any time. The volume of data is increasing by gigantic amounts day by day, and no doubt the era of big data has arrived. Big data requires different management techniques to help communities (e.g., users) perform their tasks quickly and efficiently. CAS helps in modeling user behaviors, which helps cloud providers manage users efficiently. In order to have an energy-efficient cloud infrastructure, we must understand the interaction between the different power-consuming components of complex systems, estimating the power-performance trade-off to meet energy requirements.
The volume of data is increasing at amazing speed: 90 % of the data in existence was created in just the last 2 years BIB001 . Facebook alone processes nearly 500 terabytes of data daily BIB005 . The Large Hadron Collider (LHC) computing grid is also contributing to vast data generation: dozens of petabytes of data are produced daily, and their dissemination, transmission and processing consume a huge amount of energy BIB003 . However, these data generators do not address how energy will be saved and used wisely to meet this ever-increasing need for data. GreenHadoop contributes toward energy efficiency using solar energy (Menon 2012); however, it hits a bottleneck when the weather is cloudy for many days. Hadoop also uses different techniques, like MapReduce, which deals with how effectively a query will be answered but has no concern with energy efficiency. Big data is helping different companies solve business problems with ease. Big data utilizes hardware, software, algorithms and many related techniques to perform desired functions, and uses standardized approaches to help users perform their tasks with ease. Big data has always helped users by assuring that the desired data is always accessible. However, the systems, servers, components and subsystems which facilitate users consume an enormous amount of energy. Big data serves users with its unique features and at the same time faces a variety of challenges. When we talk about the volume of data, firstly, there might be an issue of data storage, and secondly, the privacy or integrity of the data is also a major concern. Users might be affected by viruses, Trojan horses and hackers. Another feature of big data is that information and data are always accessible to the user. On the other hand, users can come across situations when data is not accessible due to a poor network connection.
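The MapReduce model mentioned above splits work into a map phase that emits key-value pairs and a reduce phase that aggregates them per key. A minimal single-process sketch of the model (not Hadoop itself; function names are illustrative):

```python
# Hedged sketch of the MapReduce model in a single process: map emits
# (key, value) pairs, shuffle groups them by key, reduce aggregates
# per key. Word count is the canonical example.
from collections import defaultdict

def map_phase(document: str):
    for word in document.lower().split():
        yield word, 1

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

def word_count(documents):
    pairs = (pair for doc in documents for pair in map_phase(doc))
    return reduce_phase(shuffle(pairs))

if __name__ == "__main__":
    docs = ["big data needs energy", "big data big clusters"]
    print(word_count(docs))
```

In a real cluster the map and reduce phases run on many nodes and the shuffle moves data over the network, which is exactly where most of the runtime and energy is spent.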
Keeping all these issues in mind, energy is another important concern which needs to be addressed. Due to the increase in technology trends and the growth of wired, wireless and mobile device networks, energy consumption has increased greatly. This increase has led to a huge demand for tools and techniques which can manage the growing demand for energy. Because of the increase in the volume of data, more resources are required to hold the data; similarly, more energy is required to sustain them. However, there exists no single technique that can efficiently address all energy consumption issues. Researchers and scientists are developing different techniques which aim to minimize energy consumption in big data. Energy consumption is of special concern in cloud computing data centers, where thousands of computers, servers, routers, switches and bridges are operating and consuming thousands of kilowatts of energy. Stakeholders of cloud computing are thinking of energy-efficient algorithms which reduce the cost of energy BIB004 . Although there exist many surveys on energy efficiency in big data, the existing research does not provide a thorough insight into energy efficiency in the context of big data and CASs. Our unique contribution is to provide energy utilization methods, techniques and algorithms for CAS. In this paper, we provide a comprehensive evaluation of existing techniques in the form of tables (i.e., Tables 2, 3, 4, 5, 6, 7); we provide an extension and expansion of the existing taxonomy of hardware-based energy efficiency techniques as expressed in Fig. 5. We estimate energy consumption per server class for the year 2007 and onward in Table 2. We provide a component-based taxonomy of energy-efficient techniques in Table 1. We examine big data in the context of complex adaptive systems and overview the variety of services provided by cloud providers and the challenges they face.
We further identify hardware- and software-based techniques and approaches used for meeting the energy demands of the cloud, and outline the different techniques. Finally, we present our findings about one of the best techniques for energy efficiency, which has some limitations but is considered comparatively better. The remainder of the paper is organized as follows: "Background" describes the background of big data services, the key challenges of big data and an overview of energy-efficient techniques. "Critical analysis of existing surveys" provides a critical analysis of existing surveys. "Energy efficient techniques" details different techniques used in big data; we also provide an evaluation of each technique against certain parameters in the context of big data in this section. We provide our summary and findings in "Summary and findings". Some open issues with DVFS are elaborated in "Open issues". The paper is concluded in "Conclusion and future work", where future directions are also elaborated.
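DVFS, whose open issues are flagged above, trades clock frequency and supply voltage against runtime: dynamic CMOS power scales roughly as P = C·V²·f, so for a fixed number of cycles the energy scales with V². A toy model under these standard assumptions (all constants are illustrative, not measurements):

```python
# Hedged toy model of DVFS: dynamic power P = C * V^2 * f. For a fixed
# cycle count, runtime = cycles / f, so energy = C * V^2 * cycles --
# lowering voltage (with frequency) saves energy quadratically even
# though the task takes longer. Constants are illustrative.

def dynamic_power(c_eff: float, voltage: float, freq_hz: float) -> float:
    return c_eff * voltage ** 2 * freq_hz

def energy_for_task(cycles: float, c_eff: float,
                    voltage: float, freq_hz: float) -> float:
    runtime = cycles / freq_hz
    return dynamic_power(c_eff, voltage, freq_hz) * runtime  # = C*V^2*cycles

if __name__ == "__main__":
    cycles, c_eff = 1e9, 1e-9
    # Full speed: 2 GHz at 1.2 V, vs scaled down: 1 GHz at 0.6 V
    e_full = energy_for_task(cycles, c_eff, 1.2, 2e9)
    e_slow = energy_for_task(cycles, c_eff, 0.6, 1e9)
    print(e_full, e_slow)  # energy drops with V^2 for the same work
```

This is also why DVFS has open issues: the savings depend on how far voltage can actually be lowered at a given frequency, and slower runs keep static (leakage) power on longer, which the model above ignores.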
Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Background <s> The frenetic development of the current architectures places a strain on the current state-of-the-art programming environments. Harnessing the full potential of such architectures has been a tremendous task for the whole scientific computing community. We present DAGuE a generic framework for architecture aware scheduling and management of micro-tasks on distributed many-core heterogeneous architectures. Applications we consider can be represented as a Direct Acyclic Graph of tasks with labeled edges designating data dependencies. DAGs are represented in a compact, problem-size independent format that can be queried on-demand to discover data dependencies, in a totally distributed fashion. DAGuE assigns computation threads to the cores, overlaps communications and computations and uses a dynamic, fully-distributed scheduler based on cache awareness, data-locality and task priority. We demonstrate the efficiency of our approach, using several micro-benchmarks to analyze the performance of different components of the framework, and a Linear Algebra factorization as a use case. <s> BIB001 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Background <s> MapReduce is a powerful paradigm that enables rapid implementation of a wide range of distributed data-intensive applications. The Hadoop project, its main open source implementation, has recently been widely adopted by the Cloud computing community. This paper aims to evaluate the cost of moving MapReduce applications to the Cloud, in order to find a proper trade-off between cost and performance for this class of applications. 
We provide a cost evaluation of running MapReduce applications in the Cloud, by looking into two aspects: the overhead implied by the execution of MapReduce jobs in the Cloud, compared to an execution on a Grid, and the actual costs of renting the corresponding Cloud resources. For our evaluation, we compared the runtime of 3 MapReduce applications executed with the Hadoop framework, in two environments: 1) on clusters belonging to the Grid’5000 experimental grid testbed and 2) in a Nimbus Cloud deployed on top of Grid’5000 nodes. <s> BIB002 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Background <s> In this paper, we review the background and state-of-the-art of big data. We first introduce the general background of big data and review related technologies, such as cloud computing, Internet of Things, data centers, and Hadoop. We then focus on the four phases of the value chain of big data, i.e., data generation, data acquisition, data storage, and data analysis. For each phase, we introduce the general background, discuss the technical challenges, and review the latest advances. We finally examine several representative applications of big data, including enterprise management, Internet of Things, online social networks, medical applications, collective intelligence, and smart grid. These discussions aim to provide a comprehensive overview and big-picture to readers of this exciting area. This survey is concluded with a discussion of open problems and future directions. <s> BIB003
Efficient energy consumption has remained a concern for researchers and experts because excessive energy consumption results in the depletion of natural resources, which in turn increases pollution and causes health hazards. According to a survey (Goiri 2012), there is a 6 % increase in CO2 emissions from the information technology (IT) sector, which is also a great hazard for human health. In recent years, various organizations like IBM, Google and Microsoft have developed data centers in which thousands of machines are running and consuming large amounts of energy. In order to cope with this energy challenge, different techniques have been developed which minimize energy consumption in data centers. Dealing with energy efficiency is necessary; otherwise, in the coming few years the cost of energy will exceed the cost of hardware. In order to deal with this issue, different software- and hardware-based techniques have been proposed and deployed in data centers BIB001 . The energy consumed in a big data center is computed by determining how much energy is consumed by each device when it is operating. Efficient utilization of energy has drawn much attention from cost and environmental perspectives BIB003 . When lots of machines are operating in a cloud infrastructure, this results in emissions of CO2. The use of the Internet, the exchange of data over the Internet, and the processing and analytical demand result in a lot of energy consumption. Therefore, power consumption methodology and the control, check and balance of power resources are necessary, along with the expandability and accessibility of big data. Different models have also been proposed for energy efficiency, but each runs into different bottlenecks because of service-level and configuration changes (Krauth 2006). However, this issue has been resolved to a great extent by modern service-providing companies. It is believed that every algorithm has some pitfalls.
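The per-device accounting described above can be sketched simply: sum each device's power draw over its operating hours to get IT energy, then relate total facility energy to IT energy via the standard Power Usage Effectiveness (PUE) ratio. The device figures below are illustrative, not measurements from any real data center.

```python
# Hedged sketch: aggregate per-device energy and compute PUE
# (PUE = total facility energy / IT equipment energy). All numbers
# below are illustrative assumptions.

def it_energy_kwh(devices) -> float:
    """devices: iterable of (power_watts, hours_operating) pairs."""
    return sum(watts * hours for watts, hours in devices) / 1000.0

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    return total_facility_kwh / it_kwh

if __name__ == "__main__":
    devices = [
        (350.0, 24.0),   # server
        (350.0, 24.0),   # server
        (150.0, 24.0),   # switch
    ]
    it = it_energy_kwh(devices)   # 20.4 kWh for the day
    facility = it + 12.0          # plus cooling, lighting, power losses
    print(f"IT={it:.1f} kWh, PUE={pue(facility, it):.2f}")
```

A PUE near 1.0 means almost all facility energy reaches the IT equipment; the cooling overhead discussed in BIB003 is what pushes real data centers well above that ideal.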
When it comes to resource usage, control of carbon emissions and domain-specific policies, it is really challenging to build one common solution for all. Some new techniques, e.g., virtualization and sampling, are also contributing towards energy efficiency, as are MapReduce and the intelligent power saving architecture (ISPA) BIB002 . Big data services are numerous and rigorously support companies' functioning. Big data helps users perform their tasks with its unique quality of services. Big data supports networking services, which have helped companies develop CRM (Customer Relationship Management) systems and extend services to users with the help of remote access and without time constraints. We briefly and precisely present an overview of big data services, big data challenges and a critical review of different energy efficiency techniques in the context of CAS in the following sections.
Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Fig. 3 Energy efficiency techniques <s> In this paper we address power conservation for clusters of workstations or PCs. Our approach is to develop systems that dynamically turn cluster nodes on – to be able to handle the load imposed on the system efficiently – and off – to save power under lighter load. The key component of our systems is an algorithm that makes load balancing and unbalancing decisions by considering both the total load imposed on the cluster and the power and performance implications of turning nodes off. The algorithm is implemented in two different ways: (1) at the application level for a cluster-based, locality-conscious network server; and (2) at the operating system level for an operating system for clustered cycle servers. Our experimental results are very favorable, showing that our systems conserve both power and energy in comparison to traditional systems. <s> BIB001 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Fig. 3 Energy efficiency techniques <s> The declining costs of commodity disk drives are rapidly changing the economics of deploying large amounts of online or near-line storage. Conventional mass storage systems use either high performance RAID clusters, automated tape libraries or a combination of tape and disk. In this paper, we analyze an alternative design using massive arrays of idle disks, or MAID. We argue that this storage organization provides storage densities matching or exceeding those of tape libraries with performance similar to disk arrays. Moreover, we show that with effective power management of individual drives, this performance can be achieved using a very small power budget.
In particular, we show that our power management strategy can result in performance comparable to an always-on RAID system while using 1/15th the power of such a RAID system. <s> BIB002 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Fig. 3 Energy efficiency techniques <s> In this paper we examine the somewhat controversial subject of energy consumption of networking devices in the Internet, motivated by data collected by the U.S. Department of Commerce. We discuss the impact on network protocols of saving energy by putting network interfaces and other router & switch components to sleep. Using sample packet traces, we first show that it is indeed reasonable to do this and then we discuss the changes that may need to be made to current Internet protocols to support a more aggressive strategy for sleeping. Since this is a position paper, we do not present results but rather suggest interesting directions for core networking research. The impact of saving energy is huge, particularly in the developing world where energy is a precious resource whose scarcity hinders widespread Internet deployment. <s> BIB003 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Fig. 3 Energy efficiency techniques <s> Power management has become increasingly necessary in large-scale datacenters to address costs and limitations in cooling or power delivery. This paper explores how to integrate power management mechanisms and policies with the virtualization technologies being actively deployed in these environments.
The goals of the proposed VirtualPower approach to online power management are (i) to support the isolated and independent operation assumed by guest virtual machines (VMs) running on virtualized platforms and (ii) to make it possible to control and globally coordinate the effects of the diverse power management policies applied by these VMs to virtualized resources. To attain these goals, VirtualPower extends to guest VMs `soft' versions of the hardware power states for which their policies are designed. The resulting technical challenge is to appropriately map VM-level updates made to soft power states to actual changes in the states or in the allocation of underlying virtualized hardware. An implementation of VirtualPower Management (VPM) for the Xen hypervisor addresses this challenge by provision of multiple system-level abstractions including VPM states, channels, mechanisms, and rules. Experimental evaluations on modern multicore platforms highlight resulting improvements in online power management capabilities, including minimization of power consumption with little or no performance penalties and the ability to throttle power consumption while still meeting application requirements. Finally, coordination of online methods for server consolidation with VPM management techniques in heterogeneous server systems is shown to provide up to 34% improvements in power consumption. <s> BIB004 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Fig. 3 Energy efficiency techniques <s> Hadoop Distributed File System (HDFS) presents unique challenges to the existing energy-conservation techniques and makes it hard to scale-down servers. We propose an energy-conserving, hybrid, logical multi-zoned variant of HDFS for managing data-processing intensive, commodity Hadoop cluster. 
Green HDFS's data-classification-driven data placement allows scale-down by guaranteeing substantially long periods (several days) of idleness in a subset of servers in the datacenter designated as the Cold Zone. These servers are then transitioned to high-energy-saving, inactive power modes. This is done without impacting the performance of the Hot zone as studies have shown that the servers in the data-intensive compute clusters are under-utilized and, hence, opportunities exist for better consolidation of the workload on the Hot Zone. Analysis of the traces of a Yahoo! Hadoop cluster showed significant heterogeneity in the data's access patterns which can be used to guide energy-aware data placement policies. The trace-driven simulation results with three-month-long real-life HDFS traces from a Hadoop cluster at Yahoo! show a 26% energy consumption reduction by doing only Cold zone power management. Analytical cost model projects savings of $14.6 million in 3-year total cost of ownership (TCO) and simulation results extrapolate savings of $2.4 million annually when Green-HDFS technique is applied across all Hadoop clusters (amounting to 38000 servers) at Yahoo. <s> BIB005 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Fig. 3 Energy efficiency techniques <s> Energy efficiency is increasingly important for future information and communication technologies (ICT), because the increased usage of ICT, together with increasing energy costs and the need to reduce green house gas emissions call for energy-efficient technologies that decrease the overall energy consumption of computation, storage and communications. Cloud computing has recently received considerable attention, as a promising approach for delivering ICT services by improving the utilization of data centre resources. 
In principle, cloud computing can be an inherently energy-efficient technology for ICT provided that its potential for significant energy savings that have so far focused on hardware aspects, can be fully explored with respect to system operation and networking aspects. Thus this paper, in the context of cloud computing, reviews the usage of methods and technologies currently used for energy-efficient operation of computer hardware and network infrastructure. After surveying some of the current best practice and relevant literature in this area, this paper identifies some of the remaining key research challenges that arise when such energy-saving techniques are extended for use in cloud computing environments. <s> BIB006 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Fig. 3 Energy efficiency techniques <s> Energy saving has become a crucial concern in datacenters as several reports predict that the anticipated energy costs over a three year period will exceed hardware acquisition. In particular, saving energy for storage is of major importance as storage devices (and cooling them off) may contribute over 25 percent of the total energy consumed in a datacenter. Recent work introduced the concept of energy proportionality and argued that it is a more relevant metric than just energy saving as it takes into account the tradeoff between energy consumption and performance. In this paper, we present a novel approach, called FREP (Fractional Replication for Energy Proportionality), for energy management in large datacenters. FREP includes a replication strategy and basic functions to enable flexible energy management. Specifically, our method provides performance guarantees by adaptively controlling the power states of a group of disks based on observed and predicted workloads. 
Our experiments using a set of real and synthetic traces show that FREP dramatically reduces energy requirements with a minimal response time penalty. <s> BIB007
Page 12 of 29 Majeed and Shah Complex Adapt Syst Model (2015) 3:6 makes the deployment a bit technical. Most data centers deploy a variety of software techniques in combination with hardware techniques to achieve energy efficiency. Energy consumption per server class (W/unit), surveyed from 2000 to 2006, is summarized in Table 2 (Valentini et al. 2011b) . With the increasing amount of data, energy consumption is increasing every day; as Table 2 depicts, it grows with server class. The use of certain techniques and approaches can reduce that power consumption. Beyond the services provided by the cloud, certain approaches need significant improvement. The current power-consumption tradeoff is not suitable in the big data environment. Tool development is not yet up to the mark: existing tools do not simulate and model user behavior over a specific period of time efficiently and accurately. Tools must be capable of modeling self-organization and other complex phenomena related to human life. Some cloud systems are unstructured (i.e., P2P systems), which requires the development of dedicated applications and tools to cater to growing energy needs. Because modern systems are unstructured, algorithms such as the self-organized power consumption approximation algorithm (SOPCA) are used to monitor the power consumption of different devices. Modern complex systems need not only to change ranges and other parameters but also to model and simulate the behavior of their entities. Some tools have been developed to handle this task, but they are very limited in scope. To obtain a better understanding and more accurate results, tools like NetLogo and agent-based toolkits have been proposed and used by researchers to model the complexity of CAS. One of the earlier works applying power management at the data center level was done by BIB001 . 
In their work, the authors proposed a technique for energy efficiency in a heterogeneous cluster of nodes serving web applications. The main contribution of this work was concentrating the workload on a subset of nodes and switching idle nodes off. However, weak load balancing and SLA implementation resulted in performance degradation. BIB004 studied power management techniques in the context of virtualized data centers. The authors introduced and applied a power management technique named "soft resource scaling". However, the adoption and implementation of this technique did not achieve the required results because guest operating systems were legacy or power-unaware. BIB003 suggested putting network interfaces, links, switches and routers into sleep modes when they are idle in order to save the energy consumed by the Internet backbone and consumers. However, such a technique can result in communication loss if necessary components are in sleep mode, and in extra power consumption when the various devices wake up. Disk design also contributes to energy efficiency: BIB002 presented the concept of MAID (massive arrays of idle disks), a technique which powers off disks when they are not in use. It is basically an array of spinning disks that writes recently used data to cache disks. However, the cache disks always remain spun up while the regular disks remain idle, which can in turn increase energy consumption. BIB007 presented a novel approach, called FREP (Fractional Replication for Energy Proportionality), for energy management in big data. FREP includes a replication strategy and basic functions to enable flexible energy management according to the cloud's needs, including load distribution and update consistency. However, the impact of the replication on the overall storage cost of the system has not been presented. 
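As an illustration of the MAID mechanism discussed above, the following toy sketch (not the actual MAID implementation; the LRU policy, cache size and access trace are illustrative assumptions) serves recently used blocks from an always-on cache disk and counts how often a powered-down bulk disk must spin up:

```python
# Toy sketch of the MAID cache-disk idea (illustrative assumptions only):
# recently accessed blocks are kept on a small always-spinning cache disk,
# and a bulk disk spins up only on a cache miss.

from collections import OrderedDict

class MaidArray:
    def __init__(self, cache_blocks):
        self.cache = OrderedDict()          # contents of the cache disk (LRU order)
        self.cache_blocks = cache_blocks    # capacity of the cache disk
        self.spin_ups = 0                   # energy-relevant metric: bulk-disk spin-ups

    def read(self, block):
        if block in self.cache:             # served by the always-on cache disk
            self.cache.move_to_end(block)
            return "cache"
        self.spin_ups += 1                  # bulk disk must spin up to serve this read
        self.cache[block] = True            # copy the block onto the cache disk
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)  # evict the least recently used block
        return "bulk"

array = MaidArray(cache_blocks=2)
accesses = ["a", "b", "a", "a", "b", "a", "c", "a"]  # skewed, hypothetical trace
hits = [array.read(x) for x in accesses]
print(array.spin_ups)  # most reads stay on the cache disk for skewed workloads
```

With a skewed access pattern like the one above, only a few reads reach the bulk disk, which is exactly the opportunity MAID exploits; the drawback noted in the text is that the cache disks themselves never spin down.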
BIB005 proposed an energy-conserving, hybrid, multi-zoned variant of HDFS for data-processing-intensive commodity Hadoop clusters. In a three-month trace-driven simulation, this variant improved energy efficiency by up to 26 %, and projected savings of $14.6 million in three-year total cost of ownership. Different types of cloud infrastructures, including traditional cloud and high performance computing (HPC), need to be enhanced to support dynamic power demands (i.e., to adjust power automatically), which in turn creates new challenges in designing energy-efficient and power-aware architectures, infrastructure and communications. This concept was given by . A comprehensive survey of energy-saving strategies in both networks and computer systems, with potential impact on saving energy in integrated systems, is given by BIB006 . highlight the energy concerns in system design and in performance- and energy-efficient application development. They explain how the goal of computer system design has shifted toward power and energy concerns. The authors carried out a detailed survey of power consumption problems; hardware- and firmware-level techniques; how the operating system contributes toward energy efficiency; data-center-level energy efficiency techniques; and the importance of virtualization in data centers for achieving energy efficiency. The survey also explains power consumption at different levels of a computing system in terms of electricity bills, power budget and CO2 emissions. DVFS has offered a great reduction in the energy consumption of cloud infrastructure by changing voltage and frequency according to workload. The implementation of this technique in the cloud has reduced power consumption significantly. Most clouds have implemented this CPU-level technique, the CPU being the most energy-consuming component. DVFS has attracted a lot of attention from the research community for being adaptive and efficient. 
Complex adaptive system modeling and simulation are used to clearly communicate facts about the nature of complex systems. The interaction and coordination of entities help in understanding the behavior of complex systems. To manage and meet energy needs in complex systems, several approaches have been proposed and used by cloud providers. The intelligent self-organizing power-saving architecture (ISPA) assists in intelligently identifying suitable idle computers and lets the system shut down or hibernate automatically based on a uniform, rule-based, company-wide policy. This architecture results in minimal performance loss compared to other techniques. Hardware- and software-based techniques are described in detail in the next section.
Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Parameter(s) Evaluation <s> Dynamic power management (DPM) is a design methodology for dynamically reconfiguring systems to provide the requested services and performance levels with a minimum number of active components or a minimum load on such components. DPM encompasses a set of techniques that achieves energy-efficient computation by selectively turning off (or reducing the performance of) system components when they are idle (or partially unexploited). In this paper, we survey several approaches to system-level dynamic power management. We first describe how systems employ power-manageable components and how the use of dynamic reconfiguration can impact the overall power consumption. We then analyze DPM implementation issues in electronic systems, and we survey recent initiatives in standardizing the hardware/software interface to enable software-controlled power management of hardware components. <s> BIB001 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Parameter(s) Evaluation <s> This paper presents a novel run-time dynamic voltage scaling scheme for low-power real-time systems. It employs software feedback control of supply voltage, which is applicable to off-the-shelf processors. It avoids interface problems from variable clock frequency. It provides efficient power reduction by fully exploiting slack time arising from workload variation. Using software analysis environment, the proposed scheme is shown to achieve 80~94% power reduction for typical real-time multimedia applications. <s> BIB002
Performance: Overall satisfactory in a small enterprise. Can be improved when the next transition is already known, or when a system model determines the transition interval, in order to avoid the overhead of activation and deactivation.
Goal: To achieve maximum energy efficiency and minimize energy consumption.
Cost (in terms of man power): These techniques are hard to develop and require more effort; the application of advanced techniques and the latest technology makes them more costly.
Switching cost: Whenever switching is done, it not only degrades performance but also increases energy consumption.
Figure 5 summarizes all hardware techniques which support energy efficiency. Hardware support is key to achieving energy efficiency using algorithms, policies and software approaches. Hardware is properly evaluated and tested by reputable companies before deployment so that energy efficiency is achieved effectively. Companies that invest money in hardware to cope with the energy issue are benefiting more than those investing in software. Different software and hardware techniques, properly implemented, produce the desired results. Recent advancements are remarkable; they have enhanced the popularity of big data by all means, delivering services to the intended users in a cost-effective and desired way. The performance evaluation of each technique is expressed in Table 3 with a few important parameters used to assess its performance. DCD is further divided into various techniques, i.e., predictive and stochastic, which contribute toward energy efficiency. In predictive techniques, the decision of when to activate and deactivate system components is made on the basis of prediction. Different policies exist which ensure the correlation between active and inactive states. Energy is consumed when components wake up and go to sleep, which also introduces performance overhead and can cause serious drawbacks. 
Predictive wakeup and predictive shutdown provide a solution to the above problem. However, these mechanisms require a degree of intelligence in their implementation. Predictive shutdown policies address the issue of inactivity: depending on the instance or situation, historical data is used to predict the next idle period. These approaches involve decision making and are highly dependent on actual energy utilization and the strength of the correlation between previous and next events. History predictors are energy efficient, but since they work on predictions they are not as safe as timeouts BIB001 . Indeed, predictions are unreliable in many situations. Predictive wakeup techniques aim to reduce the energy consumed on activation, since most components require a lot of energy at wakeup. The transition from active to inactive state is computed on the basis of previous records and sometimes on the requirements of the user (Albers 2010). In these techniques energy consumption is higher, but there is minimal performance overhead on wakeup. The performance evaluation of fixed timeout, predictive shutdown and predictive wakeup is expressed in Table 4 . The accuracy of such techniques is determined in terms of complexity, performance, maintenance, cost and energy efficiency. The above section explored the concepts related to SPM and its sub-techniques. All these techniques are static. To deal with the problem of intelligently determining idle components, adaptive techniques have been developed. Predicting the next transition is inefficient when the workload is not determined in advance. Several practical techniques focusing mainly on energy efficiency have been discussed in the literature . SPM considers the architecture of the RAM, the CPU and related components. 
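The predictive shutdown idea described above can be sketched as follows. This is a minimal illustration, not a policy from the cited works: it predicts the next idle period with an exponentially weighted average of past idle periods (the `alpha` weight and the break-even time are hypothetical values) and sleeps only when the prediction exceeds the break-even time:

```python
# Illustrative sketch of a history-based predictive shutdown policy.
# The device is put to sleep only when the predicted next idle period
# exceeds the break-even time, i.e. the idle duration at which sleeping
# saves exactly as much energy as the sleep/wake transition costs.

class PredictiveShutdown:
    def __init__(self, break_even_s, alpha=0.5):
        self.break_even_s = break_even_s   # minimum idle time worth sleeping for
        self.alpha = alpha                 # weight of the most recent observation
        self.predicted_idle_s = 0.0        # exponentially weighted average

    def observe_idle(self, idle_s):
        """Update the prediction with the length of the last idle period."""
        self.predicted_idle_s = (self.alpha * idle_s
                                 + (1 - self.alpha) * self.predicted_idle_s)

    def should_sleep(self):
        """Sleep only if the predicted idle period beats the break-even time."""
        return self.predicted_idle_s > self.break_even_s


policy = PredictiveShutdown(break_even_s=5.0)
for idle in [1.0, 2.0, 1.5]:       # short idle periods observed: stay awake
    policy.observe_idle(idle)
assert not policy.should_sleep()
for idle in [20.0, 30.0]:          # long idle periods observed: prediction rises
    policy.observe_idle(idle)
assert policy.should_sleep()
```

The sketch also shows the weakness noted in the text: the decision quality depends entirely on how strongly past idle periods correlate with the next one.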
SPM is specially designed to control the internal structure of the CPU, including circuits, chips, and the structure of buses and ports. SPM uses intelligent approaches to determine the transitions and sequences of inactive and active states.
Cost (in terms of money): The development and implementation of these techniques increase cost both in terms of man power and in terms of money.
Performance: DVFS provides good performance; it reduces the number of instructions the processor issues in a particular instance of time, which results in power reduction.
Energy efficiency: Provides good energy efficiency if the workload is known, or when dividing tasks and assigning different frequencies; it is in any case better than static power management.
V and P relationship: The equation suggests that the relationship between power and voltage is quadratic. However, it might not be quadratic; it is sometimes linear and sometimes nonlinear, depending on interactions.
Complexity: The DVFS architecture is complex, and the structure of the system sometimes increases its complexity further.
Cost (in terms of money): Implementing the same logic on chip requires huge effort; due to the technicalities involved it is costly.
Maintenance: The CPU frequencies need to be adjusted at each instruction, so it is hard to operate, and improvement and enhancement are not always easy.
Response time: Response time exhibits non-linearity, but execution is fast, so it provides better response time. Program execution is sometimes independent of the CPU; I/O-bound processes execute without CPU involvement.
The PowerNap implementation was tested on different systems; their transitions were determined and comparisons were made at different states. 
Conclusions have been drawn based on the assumption that if the switching time is less than or equal to 10 ms, power savings are approximately smooth and linear, and exceed those of DVFS. However, in realistic situations the transition time is 300 ms. The desired requirements are hard to meet, but if the mechanism for the transition time is determined, average server power can be reduced by 74 % BIB002 . The performance evaluation of PowerNap is provided in Table 6 , where we compare it on the basis of certain parameters such as complexity and cost.
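A back-of-the-envelope sketch of why the transition time dominates the savings discussed above. The model is a simplification and all power and timing values are hypothetical: each nap pays one sleep and one wake transition at active power, so short idle gaps yield little usable nap time when transitions are slow:

```python
# Simplified PowerNap-style model (hypothetical values): average power over
# one busy/idle cycle, where every nap costs two transitions at full power.

def average_power(p_active, p_nap, idle_period_s, busy_period_s, transition_s):
    """Average power (W) over one cycle with two transitions per nap."""
    nap_s = max(0.0, idle_period_s - 2 * transition_s)  # usable nap time
    awake_s = busy_period_s + (idle_period_s - nap_s)   # busy + transition time
    total_s = busy_period_s + idle_period_s
    return (awake_s * p_active + nap_s * p_nap) / total_s

# Hypothetical server: 300 W active, 10 W napping, 100 ms busy bursts
# separated by 900 ms idle gaps (i.e., roughly 10% utilization).
fast = average_power(300, 10, idle_period_s=0.9, busy_period_s=0.1, transition_s=0.01)
slow = average_power(300, 10, idle_period_s=0.9, busy_period_s=0.1, transition_s=0.3)
print(round(fast), round(slow))  # 10 ms transitions nap almost the whole gap;
                                 # 300 ms transitions consume most of it
```

Under these assumed numbers, 10 ms transitions keep average power near the ideal energy-proportional value, while 300 ms transitions give up most of the saving, which is consistent with the break-even argument in the text.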
Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Dynamic voltage and frequency scaling <s> Mobile computers typically spin down their hard disk after a fixed period of inactivity. If this threshold is too long, the disk wastes energy; if it is too short, the delay due to spinning the disk up again frustrates the user. Usage patterns change over time, so a single fixed threshold may not be appropriate at all times. Also, different users may have varying priorities with respect to trading off energy conservation against performance. We describe a method for varying the spin-down threshold dynamically by adapting to the user's access patterns and priorities. Adaptive spin-down can in some circumstances reduce by up to 50% the number of disk spin-ups that are deemed by the user to be inconvenient, while only moderately increasing energy consumption. <s> BIB001 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Dynamic voltage and frequency scaling <s> This paper presents a novel run-time dynamic voltage scaling scheme for low-power real-time systems. It employs software feedback control of supply voltage, which is applicable to off-the-shelf processors. It avoids interface problems from variable clock frequency. It provides efficient power reduction by fully exploiting slack time arising from workload variation. Using software analysis environment, the proposed scheme is shown to achieve 80~94% power reduction for typical real-time multimedia applications. <s> BIB002 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Dynamic voltage and frequency scaling <s> Scalability of the core frequency is a common feature of low-power processor architectures. 
Many heuristics for frequency scaling were proposed in the past to find the best trade-off between energy efficiency and computational performance. With complex applications exhibiting unpredictable behavior these heuristics cannot reliably adjust the operation point of the hardware because they do not know where the energy is spent and why the performance is lost. Embedded hardware monitors in the form of event counters have proven to offer valuable information in the field of performance analysis. We will demonstrate that counter values can also reveal the power-specific characteristics of a thread. In this paper we propose an energy-aware scheduling policy for non-real-time operating systems that benefits from event counters. By exploiting the information from these counters, the scheduler determines the appropriate clock frequency for each individual thread running in a time-sharing environment. A recurrent analysis of the thread-specific energy and performance profile allows an adjustment of the frequency to the behavioral changes of the application. While the clock frequency may vary in a wide range, the application performance should only suffer slightly (e.g. with 10% performance loss compared to the execution at the highest clock speed). Because of the similarity to a car cruise control, we called our scheduling policy Process Cruise Control. This adaptive clock scaling is accomplished by the operating system without any application support. Process Cruise Control has been implemented on the Intel XScale architecture, that offers a variety of frequencies and a set of configurable event counters. Energy measurements of the target architecture under variable load show the advantage of the proposed approach. 
<s> BIB003 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Dynamic voltage and frequency scaling <s> This work examines fundamental tradeoffs incurred by a speed scaler seeking to minimize the sum of expected response time and energy use per job. We prove that a popular speed scaler is 2-competitive for this objective and no "natural" speed scaler can do better. Additionally, we prove that energy-proportional speed scaling works well for both Shortest Remaining Processing Time (SRPT) and Processor Sharing (PS) and we show that under both SRPT and PS, gated-static speed scaling is nearly optimal when the mean workload is known, but that dynamic speed scaling provides robustness against uncertain workloads. Finally, we prove that speed scaling magnifies unfairness under SRPT but that PS remains fair under speed scaling. These results show that these speed scalers can achieve any two, but only two, of optimality, fairness, and robustness. <s> BIB004 </s> Energy efficiency in big data complex systems: a comprehensive survey of modern energy saving techniques <s> Dynamic voltage and frequency scaling <s> MapReduce workloads have evolved to include increasing amounts of time-sensitive, interactive data analysis; we refer to such workloads as MapReduce with Interactive Analysis (MIA). Such workloads run on large clusters, whose size and cost make energy efficiency a critical concern. Prior works on MapReduce energy efficiency have not yet considered this workload class. Increasing hardware utilization helps improve efficiency, but is challenging to achieve for MIA workloads. These concerns lead us to develop BEEMR (Berkeley Energy Efficient MapReduce), an energy efficient MapReduce workload manager motivated by empirical analysis of real-life MIA traces at Facebook. 
The key insight is that although MIA clusters host huge data volumes, the interactive jobs operate on a small fraction of the data, and thus can be served by a small pool of dedicated machines; the less time-sensitive jobs can run on the rest of the cluster in a batch fashion. BEEMR achieves 40-50% energy savings under tight design constraints, and represents a first step towards improving energy efficiency for an increasingly important class of datacenter workloads. <s> BIB005
DVFS contributes well to energy efficiency, especially in the cloud environment. CPU frequencies need proper adjustment, but frequency adjustment requires voltage scaling as well; both parameters need to be adjusted together in order to contribute toward energy efficiency. Sometimes an increase in voltage causes an increase in temperature, which in turn increases energy consumption. DVFS minimizes the number of instructions that can be issued by the CPU in a particular instance of time, which results in reduced performance. This in turn increases performance overhead, especially for CPU-bound processes. Researchers and designers have been exploring this issue for several years but have been unable to provide an optimal solution. The general formula relating power to voltage and frequency is the dynamic power equation, commonly given as P ≈ C V² f, where C is the switched capacitance, V the supply voltage and f the clock frequency. DVFS looks straightforward, but its implementation is not so easy. The structure of real systems imposes certain technicalities on DVFS. Producing the desired frequency to meet application performance is also tricky. Moreover, the authors are not sure whether the power consumed by the processor is quadratic, linear or non-linear in the supplied voltage BIB005 . Several approaches that reduce energy consumption have been practiced; they can be categorized as interval-based, intra-task and inter-task (Hwang and Wu 2000) . The interval-based technique is akin to an adaptive technique which predicts CPU cycles, with transitioning done in various orders. The inter-task approach dynamically distinguishes between processes based on their execution time and assigns them different CPU speeds (Hwang and Wu 2000; BIB001 . However, this can cause an issue when different scheduling algorithms are applied, because execution time under the round robin (RR) scheduling algorithm will differ from that under the first come first served (FCFS) algorithm. 
Voltage and frequency can best be adjusted if we know the workload in advance, or if it is constant throughout execution. In comparison with the inter-task approach, the intra-task approach provides fine-grained information about the structure of the programs and tunes the processor voltage and frequency within the tasks effectively (Buttazzo 2002; BIB004 BIB003 . The performance evaluation of DVFS is provided in Table 5 . DVFS is always concerned with energy saving through its efficient energy scheduling method. It saves energy when the peak performance of a component is not required, and it also adjusts CPU cycles when the CPU is not doing useful work, e.g., reading data from disk. DVFS scheduling is one of the best techniques contributing toward energy efficiency. DVFS uses A2E, which distinguishes it from all other techniques available for energy efficiency. It scales voltage and frequency up and down so well that performance is not hindered. DVFS uses a simple method to save energy, which is effective enough to keep servers on all the time. However, for most data-intensive solutions it may not be a suitable option, because those applications mostly use read/write operations. It competes with all other available energy-saving techniques while making minimal performance compromises. It is adaptive and its scheduling happens at runtime, which is a key to its success. This is the reason DVFS is mostly used by the leading big data companies BIB002 . Dynamic voltage and frequency scaling is deployed in many data centers to fulfill their energy needs, although devices need to be built with service-oriented and energy-oriented architectures.
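Assuming the standard dynamic-power relation P = C V² f discussed above, the following sketch (with hypothetical capacitance, voltage and frequency values) shows why lowering frequency alone does not save energy for a fixed amount of work, while the lower voltage that a reduced frequency permits does:

```python
# Minimal sketch of the DVFS dynamic-power relation P = C * V^2 * f.
# Values are hypothetical; on real processors each frequency is paired
# with a minimum stable voltage, so halving f usually allows a lower V.

def dynamic_power(c_farads, v_volts, f_hertz):
    """Dynamic power in watts: switched capacitance * voltage^2 * frequency."""
    return c_farads * v_volts ** 2 * f_hertz

def task_energy(cycles, c_farads, v_volts, f_hertz):
    # Energy = power * time, and time = cycles / f, so E = C * V^2 * cycles:
    # scaling f alone changes runtime, not energy -- the saving comes from
    # the lower voltage that the reduced frequency permits.
    return dynamic_power(c_farads, v_volts, f_hertz) * (cycles / f_hertz)

CYCLES, C = 2e9, 1e-9                                      # 2 G-cycle task, 1 nF
full = task_energy(CYCLES, C, v_volts=1.2, f_hertz=2e9)    # 2 GHz @ 1.2 V
slow = task_energy(CYCLES, C, v_volts=0.9, f_hertz=1e9)    # 1 GHz @ 0.9 V
print(full, slow)  # the quadratic voltage term drives the energy saving
```

Under these assumed operating points, the scaled-down configuration finishes the same work in twice the time but at a substantially lower energy, which is the tradeoff the inter-task and intra-task approaches exploit.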
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> INTRODUCTION <s> Engineered systems are often built of recurring circuit modules that carry out key functions. Transcription networks that regulate the responses of living cells were recently found to obey similar principles: they contain several biochemical wiring patterns, termed network motifs, which recur throughout the network. One of these motifs is the feed-forward loop (FFL). The FFL, a three-gene pattern, is composed of two input transcription factors, one of which regulates the other, both jointly regulating a target gene. The FFL has eight possible structural types, because each of the three interactions in the FFL can be activating or repressing. Here, we theoretically analyze the functions of these eight structural types. We find that four of the FFL types, termed incoherent FFLs, act as sign-sensitive accelerators: they speed up the response time of the target gene expression following stimulus steps in one direction (e.g., off to on) but not in the other direction (on to off). The other four types, coherent FFLs, act as sign-sensitive delays. We find that some FFL types appear in transcription network databases much more frequently than others. In some cases, the rare FFL types have reduced functionality (responding to only one of their two input stimuli), which may partially explain why they are selected against. Additional features, such as pulse generation and cooperativity, are discussed. This study defines the function of one of the most significant recurring circuit elements in transcription networks. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> INTRODUCTION <s> Background: There has been tremendous interest in the study of biological network structure. An array of measurements has been conceived to assess the topological properties of these networks. 
In this study, we compared the metabolic network structures of eleven single cell organisms representing the three domains of life using these measurements, hoping to find out whether the intrinsic network design principle(s), reflected by these measurements, are different among species in the three domains of life. Results: Three groups of topological properties were used in this study: network indices, degree distribution measures and motif profile measure. All of which are higher-level topological properties except for the marginal degree distribution. Metabolic networks in Archaeal species are found to be different from those in S. cerevisiae and the six Bacterial species in almost all measured higher-level topological properties. Our findings also indicate that the metabolic network in Archaeal species is similar to the exponential random network. Conclusion: If these metabolic network properties of the organisms studied can be extended to other species in their respective domains (which is likely), then the design principle(s) of Archaea are fundamentally different from those of Bacteria and Eukaryote. Furthermore, the functional mechanisms of Archaeal metabolic networks revealed in this study differentiate significantly from those of Bacterial and Eukaryotic organisms, which warrant further investigation. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> INTRODUCTION <s> The first comprehensive book on the emerging field of network science, Network Science: Theory and Applications is an exhaustive review of terms, ideas, and practices in the various areas of network science. In addition to introducing theory and application in easy-to-understand, topical chapters, this book describes the historical evolution of network science through the use of illustrations, tables, practice problems with solutions, case studies, and applications to related Java software. 
Researchers, professionals, and technicians in engineering, computing, and biology will benefit from this overview of new concepts in network science. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> INTRODUCTION <s> Professor Barabási's talk described how the tools of network science can help understand the Web's structure, development and weaknesses. The Web is an information network, in which the nodes are documents (at the time of writing over one trillion of them), connected by links. Other well-known network structures include the Internet, a physical network where the nodes are routers and the links are physical connections, and organizations, where the nodes are people and the links represent communications. <s> BIB004
Networks (or graphs) are a very flexible and powerful way of modeling many real-world systems. In essence, they capture the interactions of a system by representing entities as nodes and their relations as edges connecting them (e.g., people are nodes in social networks and edges connect those that have some relationship between them, such as friendships or citations). Networks have thus been used to analyze all kinds of social, biological and communication processes. Extracting information from networks is therefore a vital interdisciplinary task that has been emerging as a research area in itself, commonly known as Network Science BIB004 BIB003 . One very common and important methodology is to look at networks from a subgraph perspective, identifying their characteristic and recurrent connection patterns. For instance, network motif analysis has identified the feed-forward loop as a recurring and crucial functional pattern in many real biological networks, such as gene regulation and metabolic networks BIB001 BIB002 . Another example is the usage of graphlet-degree distributions to show that protein-protein interaction networks are more akin to geometric graphs than to traditional scale-free models. At the heart of these topologically rich approaches lies the subgraph counting problem, that is, the ability to compute subgraph frequencies. However, this is a very hard computational task. In fact, simply determining whether one subgraph exists at all in another larger network (i.e., subgraph isomorphism) is an NP-Complete problem. Determining the exact frequency is even harder, since millions or even billions of subgraph occurrences are typically found even in relatively small networks. Given both its usefulness and its computational hardness, subgraph counting has attracted a considerable amount of interest from the research community, with a large body of published literature.
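To give a concrete feel for what computing a subgraph frequency entails, the following minimal sketch (our own illustration, not an algorithm from the literature surveyed here) counts the simplest non-trivial subgraph, the triangle, by brute-force enumeration of all node triples:

```python
from itertools import combinations

def count_triangles(edges):
    """Count triangle occurrences in an undirected graph given as an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    # Check every 3-node subset; a triangle needs all three edges present.
    for u, v, w in combinations(sorted(adj), 3):
        if v in adj[u] and w in adj[u] and w in adj[v]:
            count += 1
    return count

# A 4-cycle with one chord contains exactly two triangles.
print(count_triangles([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))  # → 2
```

Even this naive cubic scan hints at the combinatorial blow-up: for larger subgraph sizes the number of candidate node sets, and of actual occurrences, grows explosively, which is precisely why specialized counting algorithms are needed.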
This survey aims precisely to organize and summarize these research results, providing a comprehensive overview of the field. Our main contributions are the following: • A comprehensive review of algorithms for exact subgraph counting. We give a structured historical perspective on algorithms for computing exact subgraph frequencies. We provide a complete overview table in which we employ a taxonomy that allows us to classify all algorithms on a set of key characteristics, highlighting their main similarities and differences. We also identify and describe the main conceptual ideas, giving insight into their main advantages and possible limitations, and provide links to existing implementations, exposing which approaches are readily available. • A comprehensive review of algorithms for approximate subgraph counting. Given the hardness of the problem, many authors have resorted to approximation schemes, which allow trading some accuracy for faster execution times. As in the exact case, we provide historical context and links to implementations, and we give a classification and description of key properties, explaining how the existing approaches deal with the balance between precision and running time. • A comprehensive review of parallel subgraph counting methodologies. It is only natural that researchers have tried to harness the power of parallel architectures to provide scalable approaches that decrease the needed computation time. As before, we provide an historical overview, coupled with a classification on a set of important aspects, such as the type of parallel platform or the availability of an implementation. We also give particular attention to how the methodologies tackle the unbalanced nature of the search space.
We complement this journey through the algorithmic strategies with a clear formal definition of the subgraph counting problem being discussed here, an overview of its applications, and a large number of references to related work that is not directly in the scope of this article. We believe that this survey provides the reader with an insightful and complete perspective on the field, both from a methodological and an application point of view. The remainder of this paper is structured as follows. Section 2 presents the necessary terminology, formally describes subgraph counting, and describes possible applications related to subgraph counting. Section 3 reviews exact algorithms, divided between full enumeration and analytical methods. Approximate algorithms are described in Section 4 and parallel strategies are presented in Section 5. Finally, in Section 6 we give our concluding remarks.
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Problem statement <s> Network motifs, patterns of local interconnections with potential functional properties, are important for the analysis of biological networks. To analyse motifs in networks the first step is to find patterns of interest. This paper presents 1) three different concepts for the determination of pattern frequency and 2) an algorithm to compute these frequencies. The different concepts of pattern frequency depend on the reuse of network elements. The presented algorithm finds all or highly frequent patterns under consideration of these concepts. The utility of this method is demonstrated by applying it to biological data. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Problem statement <s> Motivation: Small-induced subgraphs called graphlets are emerging as a possible tool for exploration of global and local structure of networks and for analysis of roles of individual nodes. One of the obstacles to their wider use is the computational complexity of algorithms for their discovery and counting. Results: We propose a new combinatorial method for counting graphlets and orbit signatures of network nodes. The algorithm builds a system of equations that connect counts of orbits from graphlets with up to five nodes, which allows to compute all orbit counts by enumerating just a single one. This reduces its practical time complexity in sparse graphs by an order of magnitude as compared with the existing pure enumeration-based algorithms. Availability and implementation: Source code is available freely at http://www.biolab.si/supp/orca/orca.html. 
<s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Problem statement <s> The complexity of the subgraph isomorphism problem where the pattern graph is of fixed size is well known to depend on the topology of the pattern graph. Here, we present two results which, in contrast, provide evidence that no topology of an induced subgraph of fixed size can be substantially easier to detect or count than an independent set of related size. We show that any fixed pattern graph having a maximum independent set of size k that is disjoint from other maximum independent sets is not easier to detect as an induced subgraph than an independent set of size k. It follows in particular that an induced path on 2k − 1 vertices is not easier to detect than an independent set on k vertices, and that an induced cycle on 2k vertices is not easier to detect than an independent set on k vertices. In view of linear time upper bounds on the detection of induced paths of length two and three, our lower bound is tight. Similar corollaries hold for the detection of induced complete bipartite graphs and an induced paw and its generalizations. We show also that for an arbitrary pattern graph H on k vertices with no isolated vertices, there is a simple subdivision of H, resulting from splitting each edge into a path of length four and attaching a distinct path of length three at each vertex of degree one, that is not easier to detect or count than an independent set on k vertices, respectively. Next, we show that the so-called diamond and its generalizations on k vertices are not easier to detect as induced subgraphs than an independent set on three vertices or an independent set on k vertices, respectively.
For C_4, we give weaker evidence of its hardness in terms of an independent set on three vertices. Finally, we derive several results relating the complexity of the edge-colored variant of induced subgraph isomorphism to that of the standard variant. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Problem statement <s> BACKGROUND ::: Biological networks provide great potential to understand how cells function. Network motifs, frequent topological patterns, are key structures through which biological networks operate. Finding motifs in biological networks remains a computationally challenging task as the size of the motif and the underlying network grow. Often, different copies of a given motif topology in a network share nodes or edges. Counting such overlapping copies introduces significant problems in motif identification. ::: ::: ::: RESULTS ::: In this paper, we develop a scalable algorithm for finding network motifs. Unlike most of the existing studies, our algorithm counts independent copies of each motif topology. We introduce a set of small patterns and prove that we can construct any larger pattern by joining those patterns iteratively. By iteratively joining already identified motifs with those patterns, our algorithm avoids (i) constructing topologies which do not exist in the target network, (ii) repeatedly counting the frequency of the motifs generated in subsequent iterations. Our experiments on real and synthetic networks demonstrate that our method is significantly faster and more accurate than the existing methods including SUBDUE and FSG. ::: ::: ::: CONCLUSIONS ::: We conclude that our method for finding network motifs is scalable and computationally feasible for large motif sizes and a broad range of networks with different sizes and densities. We proved that any motif with four or more edges can be constructed as a join of the small patterns. <s> BIB004
Making use of previous concepts and terminology, we now give a more formal definition of the problem tackled by this survey: given a set 𝒢 of non-isomorphic subgraphs and a graph G, determine the frequency of all induced matches of the subgraphs G_s ∈ 𝒢 in G. Two occurrences are considered different if they have at least one node or edge that they do not share. This problem is also known as subgraph census. In short, one wants to extract the occurrences of all subgraphs of a given size, or just a smaller set of "interesting" subgraphs, contained in a large graph G. Note that here the input is a single graph, in contrast with Frequent Subgraph Mining (FSM), where collections of graphs are more commonly used (differences between Subgraph Counting and FSM are discussed in Section 2.4.5). Approaches diverge on which subgraphs are counted in G. Network-centric methods extract all k-node occurrences in G and then assess each occurrence's isomorphic type. On the other end of the spectrum, subgraph-centric methods first pick an isomorphic class and then only count occurrences matching that class in G. Therefore, subgraph-centric methods are preferable to network-centric algorithms when only one or a few different subgraphs are to be counted. Set-centric approaches are middle-ground algorithms that take as input a set of interesting subgraphs and only count those in G. This work is mainly focused on network-centric algorithms, while not limited to them, since: (a) exploring all subgraphs offers the most information possible when applying subgraph counting to a real dataset; (b) hand-picking a set of interesting subgraphs might be hard or impossible and could be heavily dependent on our knowledge of the dataset; (c) it is intrinsically the most general approach. It is obviously possible to use subgraph-centric methods to count all isomorphic classes, simply by executing the method once per isomorphic type.
However, that option is only feasible for small subgraph sizes because larger k values produce too many subgraphs (see Table 1) and it is likely that a network only contains a small subset of them, meaning that the method would spend a considerable amount of time looking for features that do not exist, while network-centric methods always do useful work since they count occurrences in the network. Here we are mainly interested in algorithms that count induced subgraphs, but non-induced subgraph counting algorithms are also considered. Counting one or the other is equivalent, since it is possible to obtain induced occurrences from non-induced occurrences, and vice versa. However, we should note that, at the end of the counting process, induced occurrences need to be obtained by the algorithm. This choice penalizes non-induced subgraph counting algorithms, since the transformation is quadratic in the number of subgraphs BIB003 . Some algorithms count orbits instead of subgraphs BIB002 . However, counting orbits can be reduced to counting subgraphs and, therefore, these algorithms are also considered. We should note that we only consider the most common and well-studied subgraph frequency definition, in which different occurrences might share a partial subset of nodes and edges, but there are other possible frequency concepts in which this overlap is explicitly disallowed BIB004 BIB001 .
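To make the network-centric strategy concrete, the following self-contained sketch (an illustration under our own simplifying assumptions, not one of the surveyed algorithms) performs a k-subgraph census on an undirected graph: it enumerates every connected k-node set and classifies the induced occurrence by a brute-force canonical form, an approach that is only viable for very small k:

```python
from itertools import combinations, permutations

def subgraph_census(edges, k):
    """Network-centric census: count induced occurrences of every connected
    k-node subgraph, grouped by isomorphic class (canonical adjacency)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    def canonical(nodes):
        # Brute-force canonical form: lexicographically smallest adjacency
        # bit-string over all orderings of the k nodes (fine only for tiny k).
        best = None
        for perm in permutations(nodes):
            bits = tuple(1 if perm[j] in adj[perm[i]] else 0
                         for i in range(k) for j in range(i + 1, k))
            if best is None or bits < best:
                best = bits
        return best

    def connected(nodes):
        # Depth-first search restricted to the candidate node set.
        node_set = set(nodes)
        seen, stack = {nodes[0]}, [nodes[0]]
        while stack:
            u = stack.pop()
            for w in adj[u] & node_set:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == k

    census = {}
    for nodes in combinations(sorted(adj), k):
        if connected(nodes):
            key = canonical(nodes)
            census[key] = census.get(key, 0) + 1
    return census

# Triangle {0,1,2} plus a pendant edge 2-3: one triangle, two induced paths.
print(subgraph_census([(0, 1), (1, 2), (0, 2), (2, 3)], 3))
```

For the example graph and k = 3 this finds one triangle and two induced paths; occurrences are allowed to share nodes and edges, matching the frequency definition used in this survey.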
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> For a pattern graph H on k nodes, we consider the problems of finding and counting the number of (not necessarily induced) copies of H in a given large graph G on n nodes, as well as finding minimum weight copies in both node-weighted and edge-weighted graphs. Our results include: The number of copies of an H with an independent set of size s can be computed exactly in O*(2^s n^(k−s+3)) time. A minimum weight copy of such an H (with arbitrary real weights on nodes and edges) can be found in O(4^(s+o(s)) n^(k−s+3)) time. (The O* notation omits poly(k) factors.) These algorithms rely on fast algorithms for computing the permanent of a k × n matrix, over rings and semirings. The number of copies of any H having minimum (or maximum) node-weight (with arbitrary real weights on nodes) can be found in O(n^(ωk/3) + n^(2k/3+o(1))) time, where ω < 2.4 is the matrix multiplication exponent and k is divisible by 3. Similar results hold for other values of k. Also, the number of copies having exactly a prescribed weight can be found within this time. These algorithms extend the technique of Czumaj and Lingas (SODA 2007) and give a new (algorithmic) application of multiparty communication complexity. Finding an edge-weighted triangle of weight exactly 0 in general graphs requires Ω(n^(2.5−ε)) time for all ε > 0, unless the 3SUM problem on N numbers can be solved in O(N^(2−ε)) time. This suggests that the edge-weighted problem is much harder than its node-weighted version. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> The problems studied in this article originate from the Graph Motif problem introduced by Lacroix et al. (IEEE/ACM Trans. Comput. Biol. Bioinform. 3(4):360–368, 2006) in the context of biological networks.
The problem is to decide if a vertex-colored graph has a connected subgraph whose colors equal a given multiset of colors M. It is a graph pattern-matching problem variant, where the structure of the occurrence of the pattern is not of interest but the only requirement is the connectedness. Using an algebraic framework recently introduced by Koutis (Proceedings of the 35th International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, vol. 5125, pp. 575---586, 2008) and Koutis and Williams (Proceedings of the 36th International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, vol. 5555, pp. 653---664, 2009), we obtain new FPT algorithms for Graph Motif and variants, with improved running times. We also obtain results on the counting versions of this problem, proving that the counting problem is FPT if M is a set, but becomes #W[1]-hard if M is a multiset with two colors. Finally, we present an experimental evaluation of this approach on real datasets, showing that its performance compares favorably with existing software. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> A great variety of systems in nature, society and technology -- from the web of sexual contacts to the Internet, from the nervous system to power grids -- can be modeled as graphs of vertices coupled by edges. The network structure, describing how the graph is wired, helps us understand, predict and optimize the behavior of dynamical systems. In many cases, however, the edges are not continuously active. As an example, in networks of communication via email, text messages, or phone calls, edges represent sequences of instantaneous or practically instantaneous contacts. 
In some cases, edges are active for non-negligible periods of time: e.g., the proximity patterns of inpatients at hospitals can be represented by a graph where an edge between two individuals is on throughout the time they are at the same ward. Like network topology, the temporal structure of edge activations can affect dynamics of systems interacting through the network, from disease contagion on the network of patients to information diffusion over an e-mail network. In this review, we present the emergent field of temporal networks, and discuss methods for analyzing topological and temporal structure and models for elucidating their relation to the behavior of dynamical systems. In the light of traditional network theory, one can see this framework as moving the information of when things happen from the dynamical system on the network, to the network itself. Since fundamental properties, such as the transitivity of edges, do not necessarily hold in temporal networks, many of these methods need to be quite different from those for static networks. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Given a multiset of colors as the query and a list-colored graph, i.e., an undirected graph with a set of colors assigned to each of its vertices, in the NP-hard list-colored graph motif problem the goal is to find the largest connected subgraph such that one can select a color from the set of colors assigned to each of its vertices to obtain a subset of the query. This problem was introduced to find functional motifs in biological networks. We present a branch-and-bound algorithm named RANGI for finding and enumerating list-colored graph motifs. As our experimental results show, RANGI's pruning methods and heuristics make it quite fast in practice compared to the algorithms presented in the literature. 
We also present a parallel version of RANGI that achieves acceptable scalability. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Network motifs are small over-represented patterns that have been used successfully to characterize complex networks. Current algorithmic approaches focus essentially on pure topology and disregard node and edge nature. However, it is often the case that nodes and edges can also be classified and separated into different classes. This kind of networks can be modeled by colored (or labeled) graphs. Here we present a definition of colored motifs and an algorithm for efficiently discovering them. We use g-tries, a specialized data-structure created for finding sets of subgraphs. G-Tries encapsulate common sub-structure, and with the aid of symmetry breaking conditions and a customized canonization methodology, we are able to efficiently search for several colored patterns at the same time. We apply our algorithm to a set of representative complex networks, showing that it can find colored motifs and outperform previous methods. <s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> We tackle the problem of counting the number q_k of k-cliques in large-scale graphs, for any constant k ≥ 3. Clique counting is essential in a variety of applications, including social network analysis. Our algorithms make it possible to compute q_k for several real-world graphs and shed light on its growth rate as a function of k. Even for small values of k, the number q_k of k-cliques can be in the order of tens or hundreds of trillions.
As k increases, different graph instances show different behaviors: while on some graphs q_{k+1} Due to the computationally intensive nature of the clique counting problem, we settle for parallel solutions in the MapReduce framework, which has become in the last few years a de facto standard for batch processing of massive datasets. We give both theoretical and experimental contributions. On the theory side, we design the first exact scalable algorithm for counting (and listing) k-cliques in MapReduce. Our algorithm uses O(m^(3/2)) total space and O(m^(k/2)) work, where m is the number of graph edges. This matches the best-known bounds for triangle listing when k = 3 and is work optimal in the worst case for any k, while keeping the communication cost independent of k. We also design sampling-based estimators that can dramatically reduce the running time and space requirements of the exact approach, while providing very accurate solutions with high probability. We then assess the effectiveness of different clique counting approaches through an extensive experimental analysis over the Amazon EC2 platform, considering both our algorithms and their state-of-the-art competitors. The experimental results clearly highlight the algorithm of choice in different scenarios and prove our exact approach to be the most effective when the number of k-cliques is large, gracefully scaling to nontrivial values of k even on clusters of small/medium size. Our approximation algorithms achieve extremely accurate estimates and large speedups, especially on the toughest instances for the exact algorithms. <s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> With the growing amount of available temporal real-world network data, an important question is how to efficiently study these data.
One can simply model a temporal network as either a single aggregate static network, or as a series of time-specific snapshots, each of which is an aggregate static network over the corresponding time window. The advantage of modeling the temporal data in these two ways is that one can use existing well established methods for static network analysis to study the resulting aggregate network(s). Here, we develop a novel approach for studying temporal network data more explicitly. We base our methodology on the well established notion of graphlets (subgraphs), which have been successfully used in numerous contexts in static network research. Here, we take the notion of static graphlets to the next level and develop new theory needed to allow for graphlet-based analysis of temporal networks. Our new notion of dynamic graphlets is quite different than existing approaches for dynamic network analysis that are based on temporal motifs (statistically significant subgraphs). Namely, these approaches suffer from many limitations. For example, they can only deal with subgraph structures of limited complexity. Also, their major drawback is that their results heavily depend on the choice of a null network model that is required to evaluate the significance of a subgraph. However, choosing an appropriate null network model is a non-trivial task. Our dynamic graphlet approach overcomes the limitations of the existing temporal motif-based approaches. At the same time, when we thoroughly evaluate the ability of our new approach to characterize the structure and function of an entire temporal network or of individual nodes, we find that the dynamic graphlet approach outperforms the static graphlet approach, which indicates that accounting for temporal information helps. 
<s> BIB007 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Determining the occurrence of motifs yields profound insight for many biological systems, like metabolic, protein-protein interaction, and protein structure networks. Meaningful spatial protein-structure motifs include enzyme active sites and ligand-binding sites which are essential for function, shape, and performance of an enzyme. Analyzing their dynamics over time leads to a better understanding of underlying properties and processes. In this work, we present StreaM, a stream-based algorithm for counting undirected 4-vertex motifs in dynamic graphs. We evaluate StreaM against the four predominant approaches from the current state of the art on generated and real-world datasets, a simulation of a highly dynamic enzyme. For this case, we show that StreaM is capable to capture essential molecular protein dynamics and thereby provides a powerful method for evaluating large molecular dynamics trajectories. Compared to related work, our approach achieves speedups of up to 2,300 times on real-world datasets. <s> BIB008 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> We study the problem of estimating the value of sums of the form \(S_p \triangleq \sum \binom{x_i}{p}\) when one has the ability to sample \(x_i \ge 0\) with probability proportional to its magnitude. When \(p=2\), this problem is equivalent to estimating the selectivity of a self-join query in database systems when one can sample rows randomly. We also study the special case when \(\{x_i\}\) is the degree sequence of a graph, which corresponds to counting the number of p-stars in a graph when one has the ability to sample edges randomly.
Our algorithm for a \((1 \pm \varepsilon )\)-multiplicative approximation of \(S_p\) has query and time complexities \(\mathrm{O}\left( \frac{m \log \log n}{\epsilon ^2 S_p^{1/p}}\right) \). Here, \(m=\sum x_i/2\) is the number of edges in the graph, or equivalently, half the number of records in the database table. Similarly, n is the number of vertices in the graph and the number of unique values in the database table. We also provide tight lower bounds (up to polylogarithmic factors) in almost all cases, even when \(\{x_i\}\) is a degree sequence and one is allowed to use the structure of the graph to try to get a better estimate. We are not aware of any prior lower bounds on the problem of join selectivity estimation. For the graph problem, prior work which assumed the ability to sample only vertices uniformly gave algorithms with matching lower bounds (Gonen et al. in SIAM J Comput 25:1365–1411, 2011). With the ability to sample edges randomly, we show that one can achieve faster algorithms for approximating the number of star subgraphs, bypassing the lower bounds in this prior work. For example, in the regime where \(S_p\le n\), and \(p=2\), our upper bound is \(\tilde{O}(n/S_p^{1/2})\), in contrast to their \(\varOmega (n/S_p^{1/3})\) lower bound when no random edge queries are available. In addition, we consider the problem of counting the number of directed paths of length two when the graph is directed. This problem is equivalent to estimating the selectivity of a join query between two distinct tables. We prove that the general version of this problem cannot be solved in sublinear time. However, when the ratio between in-degree and out-degree is bounded—or equivalently, when the ratio between the number of occurrences of values in the two columns being joined is bounded—we give a sublinear time algorithm via a reduction to the undirected case. 
<s> BIB009 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> In recent years, graphlet counting has emerged as an important task in topological graph analysis. However, the existing works on graphlet counting obtain the graphlet counts for the entire network as a whole. These works capture the key graphical patterns that prevail in a given network but they fail to meet the demand of the majority of real-life graph related prediction tasks such as link prediction, edge/node classification, etc., which require to build features for an edge (or a vertex) of a network. To meet the demand for such applications, efficient algorithms are needed for counting local graphlets within the context of an edge (or a vertex). In this work, we propose an efficient method, titled E-CLOG, for counting all 3,4 and 5 size local graphlets with the context of a given edge for its all different edge orbits. We also provide a shared-memory, multi-core implementation of E-CLOG, which makes it even more scalable for very large real-world networks. In particular, We obtain strong scaling on a variety of graphs (14x-20x on 36 cores). We provide extensive experimental results to demonstrate the efficiency and effectiveness of the proposed method. For instance, we show that E-CLOG is faster than existing work by multiple order of magnitudes; for the Wordnet graph E-CLOG counts all 3,4 and 5-size local graphlets in 1.5 hours using a single thread and in only a few minutes using the parallel implementation, whereas the baseline method does not finish in more than 4 days. We also show that local graphlet counts around an edge are much better features for link prediction than well-known topological features; our experiments show that the former enjoys between 10% to 45% of improvement in the AUC value for predicting future links in three real-life social and collaboration networks. 
<s> BIB010 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Networks are a fundamental tool for modeling complex systems in a variety of domains including social and communication networks as well as biology and neuroscience. The counts of small subgraph patterns in networks, called network motifs, are crucial to understanding the structure and function of these systems. However, the role of network motifs for temporal networks, which contain many timestamped links between nodes, is not well understood. Here we develop a notion of a temporal network motif as an elementary unit of temporal networks and provide a general methodology for counting such motifs. We define temporal network motifs as induced subgraphs on sequences of edges, design several fast algorithms for counting temporal network motifs, and prove their runtime complexity. We also show that our fast algorithms achieve 1.3x to 56.5x speedups compared to a baseline method. We use our algorithms to count temporal network motifs in a variety of real-world datasets. Results show that networks from different domains have significantly different motif frequencies, whereas networks from the same domain tend to have similar motif frequencies. We also find that measuring motif counts at various time scales reveals different behavior. <s> BIB011 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> In order to detect network motifs we need to evaluate the exceptionality of subgraphs in a given network. This is usually done by comparing subgraph frequencies on both the original and an ensemble of random networks keeping certain structural properties. The classical null model implies preserving the degree sequence. 
In this paper our focus is on a richer model that approximately fixes the frequency of subgraphs of size \(K - 1\) to compute motifs of size K. We propose a method for generating random graphs under this model, and we provide algorithms for its efficient computation. We show empirical results of our proposed methodology on neurobiological networks, showcasing its efficiency and its differences when comparing to the traditional null model. <s> BIB012 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Motivated by recent studies in the data mining community, we develop the most efficient parallel algorithm for listing all k-cliques in a graph. Our theoretical analysis shows that our algorithm boasts the best asymptotic upper bound on the running time for the case when the input graph is sparse. Our experimental evaluation on large real-world graphs demonstrates that our parallel algorithm is faster than state-of-the-art algorithms, while boasting an excellent degree of parallelism. In particular, we are able to list all k-cliques (for any value of k) in graphs containing up to tens of millions of edges as well as all 10-cliques in graphs containing billions of edges, within a few minutes and a few hours respectively. We show how it can be employed as an effective subroutine for finding the k-clique core decomposition and an approximate k-clique densest subgraphs in very large real-world graphs. <s> BIB013 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> The frequency of small subtrees in biological, social, and other types of networks could shed light into the structure, function, and evolution of such networks. However, counting all possible subtrees of a prescribed size can be computationally expensive because of their potentially large number even in small, sparse networks. 
Moreover, most of the existing algorithms for subtree counting belong to the subtree-centric approaches, which search for a specific single subtree type at a time, potentially taking more time by searching again on the same network. In this paper, we propose a network-centric algorithm (MTMO) to efficiently count k-size subtrees. Our algorithm is based on the enumeration of all connected sets of k–1 edges, incorporates a labeled rooted tree data structure in the enumeration process to reduce the number of isomorphism tests required, and uses an array-based indexing scheme to simplify the subtree counting method. The experiments on three representative undirected complex networks show that our algorithm is roughly an order of magnitude faster than existing subtree-centric approaches and base network-centric algorithm which does not use rooted tree, allowing for counting larger subtrees in larger networks than previously possible. We also show major differences between unicellular and multicellular organisms. In addition, our algorithm is applied to find network motifs based on pattern growth approach. A network-centric algorithm which allows for a faster counting of non-induced subtrees is proposed. This enables us to count larger motif in larger networks than previously. <s> BIB014 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> We consider the problem of counting motifs in bipartite affiliation networks, such as author-paper, user-product, and actor-movie relations. We focus on counting the number of occurrences of a "butterfly", a complete 2x2 biclique, the simplest cohesive higher-order structure in a bipartite graph. Our main contribution is a suite of randomized algorithms that can quickly approximate the number of butterflies in a graph with a provable guarantee on accuracy. 
An experimental evaluation on large real-world networks shows that our algorithms return accurate estimates within a few seconds, even for networks with trillions of butterflies and hundreds of millions of edges. <s> BIB015 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Network alignment (NA) compares networks with the goal of finding a node mapping that uncovers highly similar (conserved) network regions. Existing NA methods are homogeneous, i.e., they can deal only with networks containing nodes and edges of one type. Due to increasing amounts of heterogeneous network data with nodes or edges of different types, we extend three recent state-of-the-art homogeneous NA methods, WAVE, MAGNA++, and SANA, to allow for heterogeneous NA for the first time. We introduce several algorithmic novelties. Namely, these existing methods compute homogeneous graphlet-based node similarities and then find high-scoring alignments with respect to these similarities, while simultaneously maximizing the amount of conserved edges. Instead, we extend homogeneous graphlets to their heterogeneous counterparts, which we then use to develop a new measure of heterogeneous node similarity. Also, we extend $S^3$, a state-of-the-art measure of edge conservation for homogeneous NA, to its heterogeneous counterpart. Then, we find high-scoring alignments with respect to our heterogeneous node similarity and edge conservation measures. In evaluations on synthetic and real-world biological networks, our proposed heterogeneous NA methods lead to higher-quality alignments and better robustness to noise in the data than their homogeneous counterparts. The software and data from this work is available upon request. 
<s> BIB016 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Motif discovery is the problem of finding subgraphs of a network that appear surprisingly often. Each such subgraph may indicate a small-scale interaction feature in applications ranging from a genomic interaction network, a significant relationship involving rock musicians, or any other application that can be represented as a network. We look at the problem of constrained search for motifs based on labels (e.g. gene ontology, musician type to continue our example from above). This chapter presents a brief review of the state of the art in motif finding and then extends the gTrie data structure from Ribeiro and Silva (Data Min Knowl Discov 28(2):337–377, 2014b) to support labels. Experiments validate the usefulness of our structure for small subgraphs, showing that we recoup the cost of the index after only a handful of queries. <s> BIB017 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Given a set of temporal networks, from different domains and with different sizes, how can we compare them? Can we identify evolutionary patterns that are both (i) characteristic and (ii) meaningful? We address these challenges by introducing a novel temporal and topological network fingerprint named Graphlet-orbit Transitions (GoT). We demonstrate that GoT provides very rich and interpretable network characterizations. Our work puts forward an extension of graphlets and uses the notion of orbits to encapsulate the roles of nodes in each subgraph. We build a transition matrix that keeps track of the temporal trajectory of nodes in terms of their orbits, therefore describing their evolution. We also introduce a metric (OTA) to compare two networks when considering these matrices. 
Our experiments show that networks representing similar systems have characteristic orbit transitions. GoT correctly groups synthetic networks pertaining to well-known graph models more accurately than competing static and dynamic state-of-the-art approaches by over 30%. Furthermore, our tests on real-world networks show that GoT produces highly interpretable results, which we use to provide insight into characteristic orbit transitions. <s> BIB018 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> MOTIVATION: Graphlets are small network patterns that can be counted in order to characterise the structure of a network (topology). As part of a topology optimisation process, one could use graphlet counts to iteratively modify a network and keep track of the graphlet counts, in order to achieve certain topological properties. Up until now, however, graphlets were not suited as a metric for performing topology optimisation; when millions of minor changes are made to the network structure it becomes computationally intractable to recalculate all the graphlet counts for each of the edge modifications. RESULTS: IncGraph is a method for calculating the differences in graphlet counts with respect to the network in its previous state, which is much more efficient than calculating the graphlet occurrences from scratch at every edge modification made. In comparison to static counting approaches, our findings show IncGraph reduces the execution time by several orders of magnitude. The usefulness of this approach was demonstrated by developing a graphlet-based metric to optimise gene regulatory networks. IncGraph is able to quickly quantify the topological impact of small changes to a network, which opens novel research opportunities to study changes in topologies in evolving or online networks, or develop graphlet-based criteria for topology optimisation.
AVAILABILITY: IncGraph is freely available as an open-source R package on CRAN (incgraph). The development version is also available on GitHub (rcannood/incgraph). <s> BIB019 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Subgraph counting is a fundamental primitive in graph processing, with applications in social network analysis (e.g., estimating the clustering coefficient of a graph), database processing and other areas. The space complexity of subgraph counting has been studied extensively in the literature, but many natural settings are still not well understood. In this paper we revisit the subgraph (and hypergraph) counting problem in the sketching model, where the algorithm's state as it processes a stream of updates to the graph is a linear function of the stream. This model has recently received a lot of attention in the literature, and has become a standard model for solving dynamic graph streaming problems. In this paper we give a tight bound on the sketching complexity of counting the number of occurrences of a small subgraph $H$ in a bounded degree graph $G$ presented as a stream of edge updates. Specifically, we show that the space complexity of the problem is governed by the fractional vertex cover number of the graph $H$. Our subgraph counting algorithm implements a natural vertex sampling approach, with sampling probabilities governed by the vertex cover of $H$. Our main technical contribution lies in a new set of Fourier analytic tools that we develop to analyze multiplayer communication protocols in the simultaneous communication model, allowing us to prove a tight lower bound. We believe that our techniques are likely to find applications in other settings. Besides giving tight bounds for all graphs $H$, both our algorithm and lower bounds extend to the hypergraph setting, albeit with some loss in space complexity.
<s> BIB020 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Many real-world applications give rise to large heterogeneous networks where nodes and edges can be of any arbitrary type (e.g., user, web page, location). Special cases of such heterogeneous graphs include homogeneous graphs, bipartite, k-partite, signed, labeled graphs, among many others. In this work, we generalize the notion of network motifs to heterogeneous networks. In particular, small induced typed subgraphs called typed graphlets (heterogeneous network motifs) are introduced and shown to be the fundamental building blocks of complex heterogeneous networks. Typed graphlets are a powerful generalization of the notion of graphlet (network motif) to heterogeneous networks as they capture both the induced subgraph of interest and the types associated with the nodes in the induced subgraph. To address this problem, we propose a fast, parallel, and space-efficient framework for counting typed graphlets in large networks. We discover the existence of non-trivial combinatorial relationships between lower-order ($k-1$)-node typed graphlets and leverage them for deriving many of the $k$-node typed graphlets in $o(1)$ constant time. Thus, we avoid explicit enumeration of those typed graphlets. Notably, the time complexity matches the best untyped graphlet counting algorithm. The experiments demonstrate the effectiveness of the proposed framework in terms of runtime, space-efficiency, parallel speedup, and scalability as it is able to handle large-scale networks. <s> BIB021 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> This paper proposes novel algorithms for efficiently counting complex network motifs in dynamic networks that are changing over time. 
Network motifs are small characteristic configurations of a few nodes and edges, and have repeatedly been shown to provide insightful information for understanding the meso-level structure of a network. Here, we deal with counting more complex temporal motifs in large-scale networks that may consist of millions of nodes and edges. The first contribution is an efficient approach to count temporal motifs in multilayer networks and networks with partial timing, two prevalent aspects of many real-world complex networks. We analyze the complexity of these algorithms and empirically validate their performance on a number of real-world user communication networks extracted from online knowledge exchange platforms. Among other things, we find that the multilayer aspects provide significant insights in how complex user interaction patterns differ substantially between online platforms. The second contribution is an analysis of the viability of motif counting algorithms for motifs that are larger than the triad motifs studied in previous work. We provide a novel categorization of motifs of size four, and determine how and at what computational cost these motifs can still be counted efficiently. In doing so, we delineate the “computational frontier” of temporal motif counting algorithms. <s> BIB022 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Algorithms Not Considered <s> Biological networks provide great potential to understand how cells function. Motifs are topological patterns which are repeated frequently in a specific network. Network motifs are key structures through which biological networks operate. However, counting independent (i.e., non-overlapping) instances of a specific motif remains to be a computationally hard problem. Motif counting problem becomes computationally even harder for biological networks as biological interactions are uncertain events. 
The main challenge behind this problem is that different embeddings of a given motif in a network can share edges. Such edges can create complex computational dependencies between different instances of the given motif when considering uncertainty of those edges. In this paper, we develop a novel algorithm for counting independent instances of a specific motif topology in probabilistic biological networks. We present a novel mathematical model to capture the dependency between each embedding and all the other embeddings, which it overlaps with. We prove the correctness of this model. We evaluate our model on real and synthetic networks with different probability, and topology models as well as reasonable range of network sizes. Our results demonstrate that our method counts non-overlapping embeddings in practical time for a broad range of networks. <s> BIB023
In this work we focus on practical algorithms that are capable of counting all subgraphs of a given size. Therefore, algorithms that only target specific subgraphs are not considered (e.g., triads, cliques BIB009 BIB006, stars BIB013 or subtrees BIB014). Furthermore, given our focus on generalizability, we do not consider algorithms that are only capable of counting subgraphs in specific graphs (e.g., bipartite networks BIB015, trees), or that only count local subgraphs BIB010. Graphs used throughout this work are simple, have a single layer of connectivity and do not distinguish node or edge types with qualitative or quantitative features. Therefore we do not discuss here algorithms that use colored nodes or edges BIB004 BIB002 BIB005, nor those that consider networks that are heterogeneous BIB016 BIB021, multilayer BIB022, labelled/attributed BIB017, probabilistic BIB023 or any kind of weighted graphs BIB001. Finally, the networks we consider are static and do not change their topology. We should, however, note that there has been increasing interest in temporal networks, which evolve over time BIB003. Some algorithms beyond the scope of this survey tackle temporal subgraph counting, either by considering temporal networks as a series of static snapshots BIB018 BIB007, by timestamping edges BIB011, or by considering a stream of small updates to the graph topology BIB019 BIB020 BIB008 BIB012.
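To make the core task concrete, the following is a minimal, illustrative Python sketch (not any of the surveyed algorithms) that exhaustively counts all connected induced k-node subgraphs of a small undirected graph, grouping them by their sorted internal degree sequence. That crude key happens to separate all isomorphism classes of undirected subgraphs with k ≤ 4; real counting tools rely on proper canonical labelling instead (e.g., via nauty). The toy graph used below is an assumption for illustration only.

```python
from itertools import combinations
from collections import Counter

def census_k_subgraphs(adj, k):
    """Count connected induced k-node subgraphs of an undirected graph,
    grouped by sorted internal degree sequence (a valid isomorphism key
    only for undirected subgraphs with k <= 4)."""
    census = Counter()
    for nodes in combinations(sorted(adj), k):
        chosen = set(nodes)
        degs = sorted(len(adj[u] & chosen) for u in nodes)
        if degs[0] == 0:
            continue  # an isolated node means the induced subgraph is disconnected
        # full connectivity check: DFS restricted to the chosen node set
        stack, seen = [nodes[0]], {nodes[0]}
        while stack:
            u = stack.pop()
            for v in adj[u] & chosen:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        if len(seen) == k:
            census[tuple(degs)] += 1
    return census

# Toy graph: triangle {0,1,2} with a pendant node 3 attached to node 2.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(sorted(census_k_subgraphs(g, 3).items()))
# [((1, 1, 2), 2), ((2, 2, 2), 1)] -> two 3-node paths and one triangle
```

Enumerating all node combinations costs O(n^k), which is only feasible for tiny graphs; the algorithms surveyed here exist precisely to avoid this combinatorial blow-up.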
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Applications and Related Problems <s> We solve the subgraph isomorphism problem in planar graphs in linear time, for any pattern of constant size. Our results are based on a technique of partitioning the planar graph into pieces of small tree-width, and applying dynamic programming within each piece. The same methods can be used to solve other planar graph problems including connectivity, diameter, girth, induced subgraph isomorphism, and shortest paths. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Applications and Related Problems <s> We report the current state of the graph isomorphism problem from the practical point of view. After describing the general principles of the refinement-individualization paradigm and proving its validity, we explain how it is implemented in several of the key programs. In particular, we bring the description of the best known program nauty up to date and describe an innovative approach called Traces that outperforms the competitors for many difficult graph classes. Detailed comparisons against saucy, Bliss and conauto are presented. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Applications and Related Problems <s> Determining the frequency of small subgraphs is an important computational task lying at the core of several graph mining methodologies, such as network motifs discovery or graphlet based measurements. In this paper we try to improve a class of algorithms available for this purpose, namely network-centric algorithms, which are based upon the enumeration of all sets of k connected nodes. Past approaches would essentially delay isomorphism tests until they had a finalized set of k nodes. 
In this paper we show how isomorphism testing can be done during the actual enumeration. We use a customized g-trie, a tree data structure, in order to encapsulate the topological information of the embedded subgraphs, identifying already known node permutations of the same subgraph type. With this we avoid redundancy and the need of an isomorphism test for each subgraph occurrence. We tested our algorithm, which we called FaSE, on a set of different real complex networks, both directed and undirected, showcasing that we indeed achieve significant speedups of at least one order of magnitude against past algorithms, paving the way for a faster network-centric approach. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Applications and Related Problems <s> The ability to find and count subgraphs of a given network is an important non trivial task with multidisciplinary applicability. Discovering network motifs or computing graphlet signatures are two examples of methodologies that at their core rely precisely on the subgraph counting problem. Here we present the g-trie, a data-structure specifically designed for discovering subgraph frequencies. We produce a tree that encapsulates the structure of the entire graph set, taking advantage of common topologies in the same way a prefix tree takes advantage of common prefixes. This avoids redundancy in the representation of the graphs, thus allowing for both memory and computation time savings. We introduce a specialized canonical labeling designed to highlight common substructures and annotate the g-trie with a set of conditional rules that break symmetries, avoiding repetitions in the computation. We introduce a novel algorithm that takes as input a set of small graphs and is able to efficiently find and count them as induced subgraphs of a larger network. 
We perform an extensive empirical evaluation of our algorithms, focusing on efficiency and scalability on a set of diversified complex networks. Results show that g-tries are able to clearly outperform previously existing algorithms by at least one order of magnitude. <s> BIB004
2.4.1 Subgraph Isomorphism. Given two graphs G and H, the subgraph isomorphism problem is the computational task of determining if G contains a subgraph isomorphic to H. Although efficient solutions might be found for specific graph types (e.g., linear solutions exist for planar graphs BIB001), this is a known NP-Complete problem for general graphs, and it can be seen as a much simpler version of counting, that is, determining whether the number of occurrences is greater than zero. This task is closely related to the graph isomorphism problem [107] BIB002, that is, the task of determining if two given graphs are isomorphic. Since many subgraph counting approaches rely on finding the subgraphs contained in a large graph and then checking which isomorphic class each subgraph found belongs to, subgraph isomorphism can be seen as an integral part of them. The well-known and very fast nauty tool is used by several subgraph counting algorithms to assess the type of the subgraph found BIB003 BIB004.
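As a hedged illustration of the decision problem (not how nauty or any surveyed algorithm works), the naive sketch below tests non-induced subgraph containment by trying every injective mapping of the pattern's nodes into the host graph; its exponential running time is consistent with the problem being NP-Complete in general. The example graphs are assumptions chosen for illustration.

```python
from itertools import permutations

def has_subgraph(g_adj, h_edges, num_h_nodes):
    """Brute-force test: does the undirected graph G contain a (non-induced)
    subgraph isomorphic to pattern H?

    g_adj: adjacency dict of G (node -> set of neighbours, symmetric).
    h_edges: edge list of H over nodes 0..num_h_nodes-1.
    Tries every injective mapping of H's nodes into G's nodes.
    """
    g_nodes = list(g_adj)
    for mapping in permutations(g_nodes, num_h_nodes):
        # keep the mapping if every pattern edge lands on a host edge
        if all(mapping[v] in g_adj[mapping[u]] for u, v in h_edges):
            return True
    return False

# G is a 4-cycle: it contains a 3-node path but no triangle.
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(has_subgraph(g, [(0, 1), (1, 2)], 3))          # True  (path on 3 nodes)
print(has_subgraph(g, [(0, 1), (1, 2), (0, 2)], 3))  # False (triangle)
```

Checking induced containment instead would additionally require that node pairs not joined in H are not joined in G under the mapping.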
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph <s> This chapter is part of a continuing research series and reports work that is collaborative in every respect. The order of our names on this and our previous reports is alphabetical. National Science Foundation Grants GS-39778 to Carnegie-Mellon University and GJ-1 154X2 to the National Bureau of Economic Research, Inc., provided financial support. We are grateful to James A. Davis, J. Richard Dietrich, and Christopher Winship for aid in conducting this research and to Richard Hill for computer programing. This chapter was written when Paul Holland was with the Computer Research Center for Economics and Management Science of the National Bureau of Economic Research, Inc. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph <s> Triadic structure is an important, but neglected, aspect of interfirm networks. We developed the constructs clustering and countering as potential drivers of triadic structure and combined them with the recently developed p* network model to demonstrate the value and feasibility of triadic analysis. Exploratory analysis of data from the global steel industry revealed firms' tendency to form transitive triads, in which three firms all have direct ties with each other, especially within blocks defined by geography or technology. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph <s> Modularity is known to be one of the most relevant characteristics of biological systems and appears to be present at multiple scales. Given its adaptive potential, it is often assumed to be the target of selective pressures. Under such interpretation, selection would be actively favouring the formation of modular structures, which would specialize in different functions. 
Here we show that, within the context of cellular networks, no such selection pressure is needed to obtain modularity. Instead, the intrinsic dynamics of network growth by duplication and diversification is able to generate it for free and explain the statistical features exhibited by small subgraphs. The implications for the evolution and evolvability of both biological and technological systems are discussed. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph <s> Dyad and triad census summarize much of the network level structural information of a given directed network. They have been found very useful in analyzing structural properties of social networks. This study aims to explore crisis communication network by following dyad and triad census analysis approach to investigate the association of microlevel communication patterns with organizational crisis. This study further tests hypothesis related to the process of data generation and tendency of the structural pattern of transitivity using dyad and triad census output. The changing communication network at Enron Corporation during the period of its crisis is analyzed in this study. Significant differences in the presence of different isomorphism classes or microlevel patterns of both dyad and triad census are noticed in crisis and non-crisis period network of Enron email corpus. It is also noticed that crisis communication network shows more transitivity compared to the non-crisis communication network. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph <s> The social role of a participant in a social system conceptualizes the circumstances under which she chooses to interact with others, making their discovery and analysis important for theoretical and practical purposes. 
In this paper, we propose a methodology to detect such roles by utilizing the conditional triad censuses of ego-networks. These censuses are a promising tool for social role extraction because they capture the degree to which basic social forces push upon a user to interact with others in a system. Clusters of triad censuses, inferred from network samples that preserve local structural properties, define the social roles. The approach is demonstrated on two large online interaction networks. <s> BIB005
Frequencies. The small patterns found in large graphs can offer insights about the networks. By considering the frequencies of all k-subgraphs, we obtain a very powerful and rich feature vector that characterizes the network. There is a long tradition of using the triad census in the analysis of social networks, and it has been used as early as the 1970s to describe local structure BIB001. Examples of applications in this field include studying social capital features such as brokerage and closure, discovering social roles BIB005, assessing the effect of individual psychological differences on network structure, or characterizing communication BIB004 and social networks. Given the ubiquity of graphs, these frequencies have also been used in many other domains, such as biological BIB003, transportation or interfirm networks BIB002.
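As a simplified, illustrative example of such a feature vector, the sketch below computes an undirected triad census: every 3-node set is tallied by its number of internal edges (0 to 3). This is an assumption-laden simplification — the classical triad census for directed networks distinguishes 16 isomorphism classes — but it conveys how a fixed-length count vector summarizes local structure.

```python
from itertools import combinations

def triad_census(adj):
    """Undirected triad census: for every 3-node set, count how many of the
    three possible edges are present. Returns a 4-entry vector indexed by
    edge count (the directed variant has 16 classes instead).

    adj: dict mapping node -> set of neighbours (undirected, symmetric)."""
    census = [0, 0, 0, 0]
    for a, b, c in combinations(sorted(adj), 3):
        edges = sum(1 for u, v in ((a, b), (a, c), (b, c)) if v in adj[u])
        census[edges] += 1
    return census

# Toy graph: a 4-node path 0-1-2-3.
g = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(triad_census(g))  # [0, 2, 2, 0]: no empty triads, 2 one-edge, 2 paths, 0 triangles
```

Comparing such vectors (usually normalized) across networks is one simple way to use subgraph frequencies as structural features.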
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Complex brains have evolved a highly efficient network architecture whose structural connectivity is capable of generating a large repertoire of functional states. We detect characteristic network building blocks (structural and functional motifs) in neuroanatomical data sets and identify a small set of structural motifs that occur in significantly increased numbers. Our analysis suggests the hypothesis that brain networks maximize both the number and the diversity of functional motifs, while the repertoire of structural motifs remains small. Using functional motif number as a cost function in an optimization algorithm, we obtain network topologies that resemble real brain networks across a broad spectrum of structural measures, including small-world attributes. These results are consistent with the hypothesis that highly evolved neural architectures are organized to maximize functional repertoires and to support highly efficient integration of information. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Genes and proteins generate molecular circuitry that enables the cell to process information and respond to stimuli. A major challenge is to identify characteristic patterns in this network of interactions that may shed light on basic cellular mechanisms. Previous studies have analyzed aspects of this network, concentrating on either transcription-regulation or protein-protein interactions. Here we search for composite network motifs: characteristic network patterns consisting of both transcription-regulation and protein-protein interactions that recur significantly more often than in random networks. 
To this end we developed algorithms for detecting motifs in networks with two or more types of interactions and applied them to an integrated data set of protein-protein interactions and transcription regulation in Saccharomyces cerevisiae. We found a two-protein mixed-feedback loop motif, five types of three-protein motifs exhibiting coregulation and complex formation, and many motifs involving four proteins. Virtually all four-protein motifs consisted of combinations of smaller motifs. This study presents a basic framework for detecting the building blocks of networks with multiple types of interactions. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Summary: Biological and engineered networks have recently been shown to display network motifs: a small set of characteristic patterns that occur much more frequently than in randomized networks with the same degree sequence. Network motifs were demonstrated to play key information processing roles in biological regulation networks. Existing algorithms for detecting network motifs act by exhaustively enumerating all subgraphs with a given number of nodes in the network. The runtime of such algorithms increases strongly with network size. Here, we present a novel algorithm that allows estimation of subgraph concentrations and detection of network motifs at a runtime that is asymptotically independent of the network size. This algorithm is based on random sampling of subgraphs. Network motifs are detected with a surprisingly small number of samples in a wide variety of networks. Our method can be applied to estimate the concentrations of larger subgraphs in larger networks than was previously possible with exhaustive enumeration algorithms. We present results for high-order motifs in several biological networks and discuss their possible functions. 
Availability: A software tool for estimating subgraph concentrations and detecting network motifs (mfinder 1.1) and further information is available at http://www.weizmann.ac.il/mcb/UriAlon/ <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> There are two common approaches to food webs. On the one hand, empirical studies have described aggregate statistical measures of many-species food webs. On the other hand, theoretical studies have explored the dynamic properties of simple tri-trophic food chains (i.e., trophic modules). The question remains to what extent results based on simple modules are relevant for whole food webs. Here we bridge between these two independent research agendas by exploring the relative frequency of different trophic modules in the five most resolved food webs. While apparent competition and intraguild predation are overrepresented when compared to a suite of null models, the frequency of omnivory highly varies across communities. Inferences about the representation of modules may also depend on the null model used for statistical significance. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Complex networks in both nature and technology have been shown to display characteristic, small subgraphs so-called motifs which appear to be related to their underlying functionality. All these networks share a common trait: they manipulate information at different scales in order to perform some kind of computation. Here we analyze a large set of software class diagrams and show that several highly frequent network motifs appear to be a consequence of network heterogeneity and size, thus suggesting a somewhat less relevant role of functionality.
However, by using a simple model of network growth by duplication and rewiring, it is shown that the rules of graph evolution seem to be largely responsible for the observed motif distribution. <s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Getting and analyzing biological interaction networks is at the core of systems biology. To help understand these complex networks, many recent works have suggested focusing on motifs, which occur more frequently than expected at random. To identify such exceptional motifs in a given network, we propose a statistical and analytical method which does not require any simulation. For this, we first provide an analytical expression of the mean and variance of the count under any exchangeable random graph model. Then we approximate the motif count distribution by a compound Poisson distribution whose parameters are derived from the mean and variance of the count. Thanks to simulations, we show that the compound Poisson approximation outperforms the Gaussian approximation. The compound Poisson distribution can then be used to get an approximate p-value and to decide if an observed count is significantly high or not. Our methodology is applied on protein-protein interaction (PPI) networks, and statistical issues related to exceptional motif detection are discussed. <s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Various methods have been recently employed to characterise the structure of biological networks. In particular, the concept of network motif and the related one of coloured motif have proven useful to model the notion of a functional/evolutionary building block. However, algorithms that enumerate all the motifs of a network may produce a very large output, and methods to decide which motifs should be selected for downstream analysis are needed.
A widely used method is to assess if the motif is exceptional, that is, over- or under-represented with respect to a null hypothesis. Much effort has been put in the last thirty years to derive p-values for the frequencies of topological motifs, that is, fixed subgraphs. They rely either on (compound) Poisson and Gaussian approximations for the motif count distribution in Erdös-Rényi random graphs or on simulations in other models. We focus on a different definition of graph motifs that corresponds to coloured motifs. A coloured motif is a connected subgraph with fixed vertex colours but unspecified topology. Our work is the first analytical attempt to assess the exceptionality of coloured motifs in networks without any simulation. We first establish analytical formulae for the mean and the variance of the count of a coloured motif in an Erdös-Rényi random graph model. Using simulations under this model, we further show that a Pólya-Aeppli distribution better approximates the distribution of the motif count compared to Gaussian or Poisson distributions. The Pólya-Aeppli distribution, and more generally the compound Poisson distributions, are indeed well designed to model counts of clumping events. Altogether, these results make it possible to derive a p-value for a coloured motif, without spending time on simulations. <s> BIB007 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> In recent years, interest has been growing in the study of complex networks. Since Erdös and Rényi (1960) proposed their random graph model about 50 years ago, many researchers have investigated and shaped this field. Many indicators have been proposed to assess the global features of networks. Recently, an active research area has developed in studying local features named motifs as the building blocks of networks.
Unfortunately, network motif discovery is a computationally hard problem and finding rather large motifs (larger than 8 nodes) by means of current algorithms is impractical as it demands too much computational effort. In this paper, we present a new algorithm (MODA) that incorporates techniques such as a pattern growth approach for extracting larger motifs efficiently. We have tested our algorithm and found it able to identify larger motifs with more than 8 nodes more efficiently than most of the current state-of-the-art motif discovery algorithms. While most of the algorithms rely on induced subgraphs as motifs of the networks, MODA is able to extract both induced and non-induced subgraphs simultaneously. The MODA source code is freely available at: http://LBB.ut.ac.ir/Download/LBBsoft/MODA/ <s> BIB008 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Network motifs are statistically overrepresented sub-structures (sub-graphs) in a network, and have been recognized as ‘the simple building blocks of complex networks’. Study of biological network motifs may reveal answers to many important biological questions. The main difficulty in detecting larger network motifs in biological networks lies in the facts that the number of possible sub-graphs increases exponentially with the network or motif size (node counts, in general), and that no known polynomial-time algorithm exists in deciding if two graphs are topologically equivalent. This article discusses the biological significance of network motifs, the motivation behind solving the motif-finding problem, and strategies to solve the various aspects of this problem. A simple classification scheme is designed to analyze the strengths and weaknesses of several existing algorithms. Experimental results derived from a few comparative studies in the literature are discussed, with conclusions that lead to future research directions. 
<s> BIB009 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> We study complex networks in which the nodes are tagged with different colors depending on their function (colored graphs), using information theory applied to the distribution of motifs in such networks. We find that colored motifs can be viewed as the building blocks of the networks (much more than the uncolored structural motifs can be) and that the relative frequency with which these motifs appear in the network can be used to define its information content. This information is defined in such a way that a network with random coloration (but keeping the relative number of nodes with different colors the same) has zero color information content. Thus, colored motif information captures the exceptionality of coloring in the motifs that is maintained via selection. We study the motif information content of the C. elegans brain as well as the evolution of colored motif information in networks that reflect the interaction between instructions in genomes of digital life organisms. While we find that colored motif information appears to capture essential functionality in the C. elegans brain (where the color assignment of nodes is straightforward), it is not obvious whether the colored motif information content always increases during evolution, as would be expected from a measure that captures network complexity. For a single choice of color assignment of instructions in the digital life form Avida, we find rather that colored motif information content increases or decreases during evolution, depending on how the genomes are organized, and therefore could be an interesting tool to dissect genomic rearrangements. <s> BIB010 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. 
<s> Comparing scientific production across different fields of knowledge is commonly controversial and subject to disagreement. Such comparisons are often based on quantitative indicators, such as papers per researcher, and data normalization is very difficult to accomplish. Different approaches can provide new insight and in this paper we focus on the comparison of different scientific fields based on their research collaboration networks. We use co-authorship networks where nodes are researchers and the edges show the existing co-authorship relations between them. Our comparison methodology is based on network motifs, which are over-represented patterns, or subgraphs. We derive motif fingerprints for 22 scientific fields based on 29 different small motifs found in the corresponding co-authorship networks. These fingerprints provide a metric for assessing similarity among scientific fields, and our analysis shows that the discrimination power of the 29 motif types is not identical. We use a co-authorship dataset built from over 15,361 publications inducing a co-authorship network with over 32,842 researchers. Our results also show that we can group different fields according to their fingerprints, supporting the notion that some fields present higher similarity and can be more easily compared. <s> BIB011 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> A motif in a network is a connected graph that occurs significantly more frequently as an induced subgraph than would be expected in a similar randomized network. By virtue of being atypical, it is thought that motifs might play a more important role than arbitrary subgraphs. Recently, a flurry of advances in the study of network motifs has created demand for faster computational means for identifying motifs in increasingly larger networks.
Motif detection is typically performed by enumerating subgraphs in an input network and in an ensemble of comparison networks; this poses a significant computational problem. Classifying the subgraphs encountered, for instance, is typically performed using a graph canonical labeling package, such as Nauty, and will typically be called billions of times. In this article, we describe an implementation of a network motif detection package, which we call NetMODE. NetMODE can only perform motif detection for k-node subgraphs up to a small fixed size, but does so without the use of Nauty. To avoid using Nauty, NetMODE has an initial pretreatment phase, where the k-node graph data for the smaller supported sizes is stored in memory. For the largest supported size we take a novel approach, which relates to the Reconstruction Conjecture for directed graphs. We find that NetMODE can perform substantially faster than its predecessors, with the speed-up depending on the subgraph size (the exact improvement varies considerably). NetMODE also (a) includes a method for generating comparison graphs uniformly at random, (b) can interface with external packages (e.g. R), and (c) can utilize multi-core architectures. NetMODE is available from netmode.sf.net. <s> BIB012 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Unexpectedly frequent subgraphs, known as motifs, can help in characterizing the structure of complex networks. Most of the existing methods for finding motifs are designed for unweighted networks, where only the existence of connection between nodes is considered, and not their strength or capacity. However, in many real world networks, edges contain more information than just simple node connectivity. In this paper, we propose a new method to incorporate edge weight information in motif mining. We think of a motif as a subgraph that contains unexpected information, and we define a new significance measurement to assess this subgraph exceptionality.
The proposed metric embeds the weight distribution in subgraphs and is based on weight entropy. We use the g-trie data structure to find instances of $k$-sized subgraphs and to calculate their significance scores. Following a statistical approach, the random entropy of subgraphs is then calculated, avoiding the time-consuming step of random network generation. The discrimination power of the motif profile derived by the proposed method is assessed against the results of the traditional unweighted motifs through a graph classification problem. We use a set of labeled ego networks of co-authorship in the biology and mathematics fields. The newly proposed method is shown to be feasible, achieving even slightly better accuracy. Furthermore, the method is faster, as it does not have to generate random networks, and it can use the weight information in computing the motif importance, avoiding the need for converting weighted networks into unweighted ones. <s> BIB013 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Complex networks facilitate the understanding of natural and man-made processes and are classified based on the concepts they model: biological, technological, social or semantic. The relevant subgraphs in these networks, called network motifs, are demonstrated to show core aspects of network functionality. They are used to classify complex networks based on that functionality. We propose a novel approach of classifying complex networks based on their topological aspects using motifs. We define the classifiers for regular, random, small-world and scale-free topologies, as well as apply this classification on empirical networks. The study brings a new perspective on how we can classify and differentiate online social networks like Facebook, Twitter and Google Plus based on the distribution of network motifs over the fundamental network topology classes.
<s> BIB014 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Many real world networks contain a statistically surprising number of certain subgraphs, called network motifs. In the prevalent approach to motif analysis, network motifs are detected by comparing subgraph frequencies in the original network with a statistical null model. In this paper we propose an alternative approach to motif analysis where network motifs are defined to be connectivity patterns that occur in a subgraph cover that represents the network using minimal total information. A subgraph cover is defined to be a set of subgraphs such that every edge of the graph is contained in at least one of the subgraphs in the cover. Some recently introduced random graph models that can incorporate significant densities of motifs have natural formulations in terms of subgraph covers and the presented approach can be used to match networks with such models. To prove the practical value of our approach we also present a heuristic for the resulting NP-hard optimization problem and give results for several real world networks. <s> BIB015 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Network motif discovery is the problem of finding subgraphs of a network that occur more frequently than expected, according to some reasonable null hypothesis. Such subgraphs may indicate small scale interaction features in genomic interaction networks or intriguing relationships involving actors or a relationship among airlines. When nodes are labeled, they can carry information such as the genomic entity under study or the dominant genre of an actor. For that reason, labeled subgraphs convey information beyond structure and could therefore enjoy more applications. To identify statistically significant motifs in a given network, we propose an analytical method (i.e. 
simulation-free) that extends the works of Picard et al. (J Comput Biol 15(1):1–20, 2008) and Schbath et al. (J Bioinform Syst Biol 2009(1):616234, 2009) to label-dependent scale-free graph models. We provide an analytical expression of the mean and variance of the count under the Expected Degree Distribution random graph model. Our model deals with both induced and non-induced motifs. We have tested our methodology on a wide set of graphs ranging from protein–protein interaction networks to movie networks. The analytical model is a fast (usually faster by orders of magnitude) alternative to simulation. This advantage increases as graphs grow in size. <s> BIB016 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> We introduce a new learning method for network motifs: interesting or informative subgraph patterns in a network. Current methods for finding motifs rely on the frequency of the motif: specifically, subgraphs are motifs when their frequency in the data is high compared to the expected frequency under a null model. To compute this expectation, the search for motifs is normally repeated on as many as 1000 random graphs sampled from the null model, a prohibitively expensive step. We use ideas from the Minimum Description Length (MDL) literature to define a new measure of motif relevance. This has several advantages: the subgraph count on samples from the null model can be eliminated, and the search for motif candidates within the data itself can be greatly simplified. Our method allows motif analysis to scale to networks with billions of links, provided that a fast null model is used. <s> BIB017 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Software homology plays an important role in intellectual property protection, malware analysis, and network attack traceback.
Among many methods proposed by researchers, the structure-based method has been proved to have better detection and anti-obfuscation capabilities, but it is inefficient in space-time complexity and difficult to apply to large-scale software homology analysis. In this paper, we propose a parallel method to extract function call graphs from source code, and a new software structure information comparison algorithm. The approach transforms the function call graph into the corresponding motifs as the features of the software, and calculates a homology score with an algorithm that is quick and accurate for large-scale software based on software motifs. According to experiments on large-scale source codes, binary executable files and obfuscated software, the accuracy of homology detection is 90.00% for non-obfuscated software and 80.00% for obfuscated software. <s> BIB018 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Co-regulatory networks, which consist of transcription factors (TFs), micro ribose nucleic acids (miRNAs), and target genes, have provided new insight into biological processes, revealing complicated and comprehensive regulatory relationships between biomolecules. To uncover the key co-regulatory mechanisms between these biomolecules, the identification of co-regulatory motifs has become beneficial. However, due to high computational complexity, it is a hard task to identify co-regulatory network motifs with more than four interacting nodes in large-scale co-regulatory networks. To overcome this limitation, we propose an efficient algorithm, named large co-regulatory network motif (LCNM), to detect large co-regulatory network motifs. This algorithm is able to store a set of co-regulatory network motifs within a g-trie structure. Moreover, we propose two ways to generate candidate motifs.
For three- or four-interacting-node motifs, LCNM is able to generate all different types of motif through an enumeration method. For larger network motifs, we adopt a sampling method to generate candidate co-regulatory motifs. The experimental results demonstrate that LCNM can not only improve the computational performance in exhaustive identification of all of the three- or four-node motifs but can also identify co-regulatory network motifs with a maximum of eight nodes. In addition, we implement a parallel version of our LCNM algorithm to further accelerate the motif detection process. <s> BIB019 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Networks are powerful representations of topological features in biological systems like protein interaction and gene regulation. In order to understand the design principles of such complex networks, the concept of network motifs emerged. Network motifs are recurrent patterns with statistical significance that can be seen as basic building blocks of complex networks. Identification of network motifs leads to many important applications, such as understanding the modularity and the large-scale structure of biological networks, classification of networks into super-families, protein function annotation, etc. However, identification of network motifs is challenging as it involves graph isomorphism which is computationally hard. Though this problem has been studied extensively in the literature using different computational approaches, we are far from satisfactory results. Motivated by the challenges involved in this field, an efficient and scalable network Motif Discovery algorithm based on Expansion Tree (MODET) is proposed. A pattern-growth approach is used in this proposed motif-centric algorithm. Each node of the expansion tree represents a non-isomorphic pattern.
The embeddings corresponding to a child node of the expansion tree are obtained from the embeddings of the parent node through vertex addition and edge addition. Further, the proposed algorithm does not involve any graph isomorphism check, and the time complexities of these processes are $O(n)$ and $O(1)$, respectively. The proposed algorithm has been tested on a Protein-Protein Interaction (PPI) network obtained from the MINT database. The computational efficiency of the proposed algorithm outperforms most of the existing network motif discovery algorithms. <s> BIB020 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> For scale-free networks with degrees following a power law with an exponent $\tau\in(2,3)$, the structures of motifs (small subgraphs) are not yet well understood. We introduce a method designed to identify the dominant structure of any given motif as the solution of an optimization problem. The unique optimizer describes the degrees of the vertices that together span the most likely motif, resulting in explicit asymptotic formulas for the motif count and its fluctuations. We then classify all motifs into two categories: motifs with small and large fluctuations. <s> BIB021 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Network Motifs. <s> Network motifs provide an enlightening insight into uncovering the structural design principles of complex networks across multifarious disciplines, such as physics, biology, social science, engineering, and military science.
Measures for network motifs play an indispensable role in the procedures of motif measurement and evaluation, which are crucial steps in motif detection, counting, and clustering. However, there is a relatively small body of literature concerned with measures for network motifs. In this paper, we review the measures for network motifs in two categories: structural measures and statistical measures. The application scenarios for each measure and the distinctions of measures in similar scenarios are also summarized. We also discuss the challenges of using these measures and put forward some future directions on this topic. Overall, the objective of this survey is to provide an overview of motif measures, which is anticipated to shed light on the theory and practice of complex networks. <s> BIB022
A subgraph is considered a network motif if it is somehow exceptional. Instead of simply using a frequency vector, motif-based approaches construct a significance profile that associates an importance with each subgraph, typically related to how overrepresented it is. This concept first appeared in 2002, when motifs were defined as subgraphs that occur more often than expected when compared against a null model . The most common null model keeps the degree sequence, and with it we can obtain characteristic network fingerprints that have been shown to be very rich and capable of classifying networks into distinct superfamilies . Network motif analysis has since been used in a vast range of applications, such as in the analysis of biological networks (e.g., brain BIB001 , regulation and protein interaction BIB002 or food webs BIB004 ), social networks (e.g., co-authorship BIB011 or online social networks BIB014 ), sports analytics (e.g., football passing ) or software networks (e.g., software architecture BIB005 or function-call graphs BIB018 ). In order to compute the significance profile of motifs in a graph G, most conceptual approaches rely on generating a large set R(G) of similar randomized networks that serve as the desired null model. Thus, subgraph counting needs to be performed both on the original network and on the set of randomized networks. If the frequency of a subgraph S is significantly bigger in G than its average frequency in R(G), we can consider S to be a network motif of G BIB003 . Other approaches try to avoid exhaustive generation of random networks, and thus also avoid counting subgraphs on them, by following a more analytical approach capable of providing estimations of the expected frequencies (e.g., using an expected degree model BIB016 BIB006 BIB007 or a scale-free model BIB021 ). Nevertheless, there is always the need of counting subgraphs in the original network.
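The randomization-based significance computation just described can be sketched in a few lines. The following is a minimal illustration, assuming the subgraph frequencies have already been counted on the original network and on each randomized network in R(G); the motif names and counts below are purely hypothetical:

```python
import math

def significance_profile(orig_counts, rand_counts):
    """Z-score each subgraph's frequency in the original network against
    an ensemble of randomized networks, then normalize the z-score
    vector into a significance profile (a unit-length vector)."""
    z = {}
    for motif, f_orig in orig_counts.items():
        samples = [rc.get(motif, 0) for rc in rand_counts]
        mean = sum(samples) / len(samples)
        sd = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
        # a subgraph with a high positive z-score is a motif candidate
        z[motif] = (f_orig - mean) / sd if sd > 0 else 0.0
    norm = math.sqrt(sum(v * v for v in z.values()))
    return {m: (v / norm if norm > 0 else 0.0) for m, v in z.items()}

# hypothetical counts: the feed-forward pattern is heavily overrepresented
profile = significance_profile(
    {"feed-forward": 120, "cycle": 5},
    [{"feed-forward": 40, "cycle": 6},
     {"feed-forward": 50, "cycle": 4},
     {"feed-forward": 45, "cycle": 5}],
)
```

Real implementations differ mainly in how the randomized ensemble is generated (e.g., degree-preserving edge swaps) and in how the subgraph counts themselves are obtained, which is precisely the computational bottleneck discussed throughout this survey.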
While network motifs are usually about induced subgraph occurrences BIB009 , there are some motif algorithms that count non-induced occurrences instead BIB012 BIB008 . Moreover, although most network motif usages assume the previously mentioned statistical view of significance as overrepresentation, there are other possible approaches BIB022 such as using information theory concepts (e.g., motifs based on entropy BIB010 BIB013 , subgraph covers BIB015 , or minimum description length BIB017 ). We should also note that some approaches try to better navigate the space of "interesting" subgraphs, so that larger motif sizes can be reached not by searching all possible larger k-subgraphs, but instead by leveraging computations of smaller motifs BIB019 BIB020 . Finally, we should note that several authors use the term motif to refer to small subgraphs, even when it does not imply any significance value beyond simple frequency on the original network.
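The induced versus non-induced distinction can be made concrete with a toy example: in a triangle, every pair of edges sharing a centre vertex is a non-induced occurrence of the 3-node path, yet none of them is an induced occurrence, because the two endpoints are always adjacent. A small self-contained sketch (plain Python over adjacency sets; the graphs are illustrative only):

```python
from itertools import combinations

def count_paths3(adj):
    """Count 3-node paths (two edges meeting at a centre vertex) in an
    undirected graph: non-induced counts every such pair of edges, while
    induced additionally requires the two endpoints to be non-adjacent."""
    non_induced = induced = 0
    for centre in adj:
        for u, v in combinations(sorted(adj[centre]), 2):
            non_induced += 1
            if v not in adj[u]:
                induced += 1
    return non_induced, induced

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path = {0: {1}, 1: {0, 2}, 2: {1}}
print(count_paths3(triangle))  # -> (3, 0): each path closes into a triangle
print(count_paths3(path))      # -> (1, 1): the single path is also induced
```

Induced counts of all patterns of one size can be converted into non-induced counts (and vice versa) by a linear transformation over the subgraph lattice, which is why some algorithms freely work with either definition.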
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 2.4.4 <s> Important biological information is encoded in the topology of biological networks. Comparative analyses of biological networks are proving to be valuable, as they can lead to transfer of knowledge between species and give deeper insights into biological function, disease, and evolution. We introduce a new method that uses the Hungarian algorithm to produce optimal global alignment between two networks using any cost function. We design a cost function based solely on network topology and use it in our network alignment. Our method can be applied to any two networks, not just biological ones, since it is based only on network topology. We use our new method to align protein-protein interaction networks of two eukaryotic species and demonstrate that our alignment exposes large and topologically complex regions of network similarity. At the same time, our alignment is biologically valid, since many of the aligned protein pairs perform the same biological function. From the alignment, we predict function of yet unannotated proteins, many of which we validate in the literature. Also, we apply our method to find topological similarities between metabolic networks of different species and build phylogenetic trees based on our network alignment score. The phylogenetic trees obtained in this way bear a striking resemblance to the ones obtained by sequence alignments. Our method detects topologically similar regions in large networks that are statistically significant. It does this independent of protein sequence or any other information external to network topology. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 2.4.4 <s> Sequence comparison and alignment has had an enormous impact on our understanding of evolution, biology and disease. 
Comparison and alignment of biological networks will probably have a similar impact. Existing network alignments use information external to the networks, such as sequence, because no good algorithm for purely topological alignment has yet been devised. In this paper, we present a novel algorithm based solely on network topology, that can be used to align any two networks. We apply it to biological networks to produce by far the most complete topological alignments of biological networks to date. We demonstrate that both species phylogeny and detailed biological function of individual proteins can be extracted from our alignments. Topology-based alignments have the potential to provide a completely new, independent source of phylogenetic information. Our alignment of the protein-protein interaction networks of two very different species, yeast and human, indicates that even distant species share a surprising amount of network topology, suggesting broad similarities in internal cellular wiring across all life on Earth. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 2.4.4 <s> Motivation: High-throughput methods for detecting molecular interactions have produced large sets of biological network data with much more yet to come. Analogous to sequence alignment, efficient and reliable network alignment methods are expected to improve our understanding of biological systems. Unlike sequence alignment, network alignment is computationally intractable. Hence, devising efficient network alignment heuristics is currently a foremost challenge in computational biology. Results: We introduce a novel network alignment algorithm, called Matching-based Integrative GRAph ALigner (MI-GRAAL), which can integrate any number and type of similarity measures between network nodes (e.g.
proteins), including, but not limited to, any topological network similarity measure, sequence similarity, functional similarity and structural similarity. Hence, we resolve the ties in similarity measures and find a combination of similarity measures yielding the largest contiguous (i.e. connected) and biologically sound alignments. MI-GRAAL exposes the largest functional, connected regions of protein–protein interaction (PPI) network similarity to date: surprisingly, it reveals that 77.7% of proteins in the baker's yeast high-confidence PPI network participate in such a subnetwork that is fully contained in the human high-confidence PPI network. This is the first demonstration that species as diverse as yeast and human contain so large, continuous regions of global network similarity. We apply MI-GRAAL's alignments to predict functions of un-annotated proteins in yeast, human and bacteria, validating our predictions in the literature. Furthermore, using network alignment scores for PPI networks of different herpes viruses, we reconstruct their phylogenetic relationship. This is the first time that phylogeny is exactly reconstructed from purely topological alignments of PPI networks. Availability: Supplementary files and MI-GRAAL executables: http://bio-nets.doc.ic.ac.uk/MI-GRAAL/. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 2.4.4 <s> Network alignment can be used to transfer functional knowledge between conserved regions of different networks. Existing methods use a node cost function (NCF) to compare nodes across networks and an alignment strategy (AS) to find high-scoring alignments with respect to total NCF over all aligned nodes (or node conservation).
Then, they evaluate alignments via a measure that is different than node conservation used to guide alignment construction. Typically, one measures edge conservation, but only after alignments are produced. Hence, we recently directly maximized edge conservation while constructing alignments, which improved their quality. Here, we aim to maximize both node and edge conservation during alignment construction to further improve quality. We design a novel measure of edge conservation that (unlike existing measures that treat each conserved edge the same) weighs conserved edges to favor edges with highly NCF-similar end-nodes. As a result, we introduce a novel AS, Weighted Alignment VotEr (WAVE), which can optimize any measures of node and edge conservation. Using WAVE on top of well-established NCFs improves alignments compared to existing methods that optimize only node or edge conservation or treat each conserved edge the same. We evaluate WAVE on biological data, but it is applicable in any domain. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 2.4.4 <s> Motivation: Discovering and understanding patterns in networks of protein–protein interactions (PPIs) is a central problem in systems biology. Alignments between these networks aid functional understanding as they uncover important information, such as evolutionary conserved pathways, protein complexes and functional orthologs. A few methods have been proposed for global PPI network alignments, but because of NP-completeness of underlying sub-graph isomorphism problem, producing topologically and biologically accurate alignments remains a challenge. ::: ::: Results: We introduce a novel global network alignment tool, Lagrangian GRAphlet-based ALigner (L-GRAAL), which directly optimizes both the protein and the interaction functional conservations, using a novel alignment search heuristic based on integer programming and Lagrangian relaxation. 
We compare L-GRAAL with the state-of-the-art network aligners on the largest available PPI networks from BioGRID and observe that L-GRAAL uncovers the largest common sub-graphs between the networks, as measured by edge-correctness and symmetric sub-structures scores, which allow transferring more functional information across networks. We assess the biological quality of the protein mappings using the semantic similarity of their Gene Ontology annotations and observe that L-GRAAL best uncovers functionally conserved proteins. Furthermore, we introduce for the first time a measure of the semantic similarity of the mapped interactions and show that L-GRAAL also uncovers best functionally conserved interactions. In addition, we illustrate on the PPI networks of baker's yeast and human the ability of L-GRAAL to predict new PPIs. Finally, L-GRAAL's results are the first to show that topological information is more important than sequence information for uncovering functionally conserved interactions. ::: ::: Availability and implementation: L-GRAAL is coded in C++. Software is available at: http://bio-nets.doc.ic.ac.uk/L-GRAAL/. ::: ::: Contact: [email protected] ::: ::: Supplementary information: Supplementary data are available at Bioinformatics online. <s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 2.4.4 <s> Motivation ::: Network alignment (NA) finds conserved regions between two networks. NA methods optimize node conservation (NC) and edge conservation (EC). Dynamic graphlet degree vectors (DGDVs) are a state-of-the-art dynamic NC measure, used within the fastest and most accurate NA method for temporal networks: DynaWAVE. Here, we use graphlet-orbit transitions (GoTs), a different graphlet-based measure of temporal node similarity, as a new dynamic NC measure within DynaWAVE, resulting in GoT-WAVE.
::: ::: ::: Results ::: On synthetic networks, GoT-WAVE improves DynaWAVE's accuracy by 30% and speed by 64%. On real networks, when optimizing only dynamic NC, the methods are complementary. Furthermore, only GoT-WAVE supports directed edges. Hence, GoT-WAVE is a promising new temporal NA algorithm, which efficiently optimizes dynamic NC. We provide a user-friendly user interface and source code for GoT-WAVE. ::: ::: ::: Availability and implementation ::: http://www.dcc.fc.up.pt/got-wave/. <s> BIB006 </s>
Orbit-Aware Approaches and Network Alignment. When authors use the term graphlet, they commonly take orbits into consideration and use metrics such as the graphlet-degree distribution (GDD, see details in section 2.1), a concept that appeared in 2007. In this way, graphlet algorithms count how many times each node appears in each orbit. Unlike motifs, graphlets do not usually need a null model (i.e., networks are directly compared by comparing their respective GDDs). These orbit-aware distributions can be used for comparing networks: for instance, they have been used to show that protein interaction networks are more akin to random geometric graphs than to traditional scale-free networks. Moreover, they are also used to compare nodes (using graphlet-degree vectors). This makes them useful for network alignment tasks, where one needs to establish topological similarity between nodes from different networks BIB001. Several graphlet-based network alignment algorithms have been proposed and shown to work very well for aligning biological networks BIB006 BIB002 BIB003 BIB005 BIB004.
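To make the orbit notion concrete, the following self-contained sketch (our own illustration, not code from any of the surveyed tools) computes, for every node, a graphlet-degree vector over the four orbits of the undirected graphlets with up to 3 nodes: orbit 0 (edge endpoint, i.e. degree), orbits 1 and 2 (end and centre of a 2-path), and orbit 3 (triangle vertex).

```python
from itertools import combinations

def gdv3(adj):
    """Graphlet-degree vector over the 4 orbits of graphlets with up
    to 3 nodes: orbit 0 = edge endpoint (degree), orbit 1 = end of a
    2-path, orbit 2 = centre of a 2-path, orbit 3 = triangle vertex."""
    gdv = {v: [0, 0, 0, 0] for v in adj}
    for v in adj:
        gdv[v][0] = len(adj[v])               # orbit 0: degree
        for u, w in combinations(adj[v], 2):  # v is the centre vertex
            if u in adj[w]:                   # triangle {u, v, w}
                gdv[v][3] += 1
            else:                             # induced 2-path u - v - w
                gdv[v][2] += 1
                gdv[u][1] += 1
                gdv[w][1] += 1
    return gdv

# toy graph: a triangle 0-1-2 with a pendant vertex 3 attached to 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(gdv3(adj))
```

Comparing these per-node vectors (e.g. with the graphlet-degree signature similarity) is exactly what orbit-aware alignment methods use to score node pairs across two networks.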
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Frequent Subgraph Mining (FSM) <s> We investigate new approaches for frequent graph-based pattern mining in graph datasets and propose a novel algorithm called gSpan (graph-based substructure pattern mining), which discovers frequent substructures without candidate generation. gSpan builds a new lexicographic order among graphs, and maps each graph to a unique minimum DFS code as its canonical label. Based on this lexicographic order gSpan adopts the depth-first search strategy to mine frequent connected subgraphs efficiently. Our performance study shows that gSpan substantially outperforms previous algorithms, sometimes by an order of magnitude. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Frequent Subgraph Mining (FSM) <s> Frequent subgraph mining is an active research topic in the data mining community. A graph is a general model to represent data and has been used in many domains like cheminformatics and bioinformatics. Mining patterns from graph databases is challenging since graph related operations, such as subgraph testing, generally have higher time complexity than the corresponding operations on itemsets, sequences, and trees, which have been studied extensively. We propose a novel frequent subgraph mining algorithm: FFSM, which employs a vertical search scheme within an algebraic graph framework we have developed to reduce the number of redundant candidates proposed. Our empirical study on synthetic and real datasets demonstrates that FFSM achieves a substantial performance gain over the current start-of-the-art subgraph mining algorithm gSpan. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Frequent Subgraph Mining (FSM) <s> Graph mining is an important research area within the domain of data mining. 
The field of study concentrates on the identification of frequent subgraphs within graph data sets. The research goals are directed at: (i) effective mechanisms for generating candidate subgraphs (without generating duplicates) and (ii) how best to process the generated candidate subgraphs so as to identify the desired frequent subgraphs in a way that is computationally efficient and procedurally effective. This paper presents a survey of current research in the field of frequent subgraph mining and proposes solutions to address the main research issues. <s> BIB003
FSM algorithms find subgraphs that have a support higher than a given threshold. The most prevalent branch of FSM takes as input a collection of networks and finds which subgraphs appear in a large number of them, referred to as graph-transaction-based FSM BIB003. These algorithms BIB002 BIB001 rely heavily on the Downward Closure Property (DCP) to efficiently prune the search space. Algorithms for subgraph counting, which is our focus, cannot in general rely on the DCP, since it is not possible to know whether growing an infrequent k-node subgraph will result in a frequent (k+1)-node subgraph. Furthermore, we are interested not only in frequent subgraphs but in all of them, since rare subgraphs can also give information about the network's topology. A less prominent branch of FSM, single-graph-based FSM, targets frequent subgraphs in a single large network, much like our subgraph counting problem. However, it adopts support metrics chosen so that the DCP holds, which, as stated previously, is not the case in the general subgraph counting problem BIB003.
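The Downward Closure Property at the heart of graph-transaction FSM can be illustrated with a toy sketch (hypothetical code of ours, not taken from any FSM tool): support is the number of database graphs containing a pattern, and it can never grow when the pattern is extended.

```python
from itertools import permutations

def contains(graph_adj, pattern_edges):
    """Brute-force test: does graph_adj contain pattern_edges as a
    (not necessarily induced) subgraph? Fine for tiny patterns only."""
    pat_nodes = sorted({v for e in pattern_edges for v in e})
    for mapping in permutations(list(graph_adj), len(pat_nodes)):
        m = dict(zip(pat_nodes, mapping))
        if all(m[b] in graph_adj[m[a]] for a, b in pattern_edges):
            return True
    return False

def support(db, pattern_edges):
    """Graph-transaction support: number of database graphs that
    contain the pattern."""
    return sum(contains(g, pattern_edges) for g in db)

# three tiny transaction graphs (undirected, as adjacency sets)
db = [
    {0: {1, 2}, 1: {0, 2}, 2: {0, 1}},            # triangle
    {0: {1}, 1: {0, 2}, 2: {1}},                  # path of length 2
    {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}, # triangle + pendant
]
path2 = [(0, 1), (1, 2)]     # 2-edge path pattern
triangle = path2 + [(0, 2)]  # superpattern of path2

# Downward Closure: support(triangle) can never exceed support(path2)
print(support(db, path2), support(db, triangle))
```

This monotonicity is what lets gSpan-style algorithms stop extending a pattern as soon as it falls below the support threshold, a shortcut unavailable in general subgraph counting.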
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> Complex networks from domains like Biology or Sociology are present in many e-Science data sets. Dealing with networks can often form a workflow bottleneck as several related algorithms are computationally hard. One example is detecting characteristic patterns or "network motifs" - a problem involving subgraph mining and graph isomorphism. This paper provides a review and runtime comparison of current motif detection algorithms in the field. We present the strategies and the corresponding algorithms in pseudo-code yielding a framework for comparison. We categorize the algorithms outlining the main differences and advantages of each strategy. We finally implement all strategies in a common platform to allow a fair and objective efficiency comparison using a set of benchmark networks. We hope to inform the choice of strategy and critically discuss future improvements in motif detection. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> Network motifs are statistically overrepresented sub-structures (sub-graphs) in a network, and have been recognized as ‘the simple building blocks of complex networks’. Study of biological network motifs may reveal answers to many important biological questions. The main difficulty in detecting larger network motifs in biological networks lies in the facts that the number of possible sub-graphs increases exponentially with the network or motif size (node counts, in general), and that no known polynomial-time algorithm exists in deciding if two graphs are topologically equivalent. This article discusses the biological significance of network motifs, the motivation behind solving the motif-finding problem, and strategies to solve the various aspects of this problem. 
A simple classification scheme is designed to analyze the strengths and weaknesses of several existing algorithms. Experimental results derived from a few comparative studies in the literature are discussed, with conclusions that lead to future research directions. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> In recent years, there has been a great interest in studying different aspects of complex networks in a range of fields. One important local property of networks is network motifs, recurrent and statistically significant sub-graphs or patterns, which assists researchers in the identification of functional units in the networks. Although network motifs may provide a deep insight into the network's functional abilities, their detection is computationally challenging. Therefore several algorithms have been introduced to resolve this computationally hard problem. These algorithms can be classified under various paradigms such as exact counting methods, sampling methods, pattern growth methods and so on. Here, the authors will give a review on computational aspects of major algorithms and enumerate their related benefits and drawbacks from an algorithmic perspective. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> Network motif is defined as a frequent and unique subgraph pattern in a network, and the search involves counting all the possible instances or listing all patterns, testing isomorphism known as NP-hard and large amounts of repeated processes for statistical evaluation. Although many efficient algorithms have been introduced, exhaustive search methods are still infeasible and feasible approximation methods are yet implausible. Additionally, the fast and continual growth of biological networks makes the problem more challenging. 
As a consequence, parallel algorithms have been developed and distributed computing has been tested in the cloud computing environment as well. In this paper, we survey current algorithms for network motif detection and existing software tools. Then, we show that some methods have been utilized for parallel network motif search algorithms with static or dynamic load balancing techniques. With the advent of cloud computing services, network motif search has been implemented with MapReduce in Hadoop Distributed File System (HDFS), and with Storm, but without statistical testing. In this paper, we survey network motif search algorithms in general, including existing parallel methods as well as cloud computing based search, and show the promising potentials for the cloud computing based motif search methods. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> Network motif detection is the search for statistically overrepresented subgraphs present in a larger target network. They are thought to represent key structure and control mechanisms. Although the problem is exponential in nature, several algorithms and tools have been developed for efficiently detecting network motifs. This work analyzes 11 network motif detection tools and algorithms. Detailed comparisons and insightful directions for using these tools and algorithms are discussed. Key aspects of network motif detection are investigated. Network motif types and common network motifs as well as their biological functions are discussed. Applications of network motifs are also presented. Finally, the challenges, future improvements and future research directions for network motif detection are also discussed. 
<s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> Counting and enumeration of local topological structures, such as triangles, is an important task for analyzing large real-life networks. For instance, triangle count in a network is used to compute transitivity—an important property for understanding graph evolution over time. Triangles are also used for various other tasks completed for real-life networks, including community discovery, link prediction, and spam filtering. The task of triangle counting, though simple, has gained wide attention in recent years from the data mining community. This is due to the fact that most of the existing algorithms for counting triangles do not scale well to very large networks with millions (or even billions) of vertices. To circumvent this limitation, researchers proposed triangle counting methods that approximate the count or run on distributed clusters. In this paper, we discuss the existing methods of triangle counting, ranging from sequential to parallel, single-machine to distributed, exact to approximate, and off-line to streaming. We also present experimental results of performance comparison among a set of approximate triangle counting methods built under a unified implementation framework. Finally, we conclude with a discussion of future works in this direction. ::: ::: For further resources related to this article, please visit the WIREs website. <s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> Network motifs provide an enlightening insight into uncovering the structural design principles of complex networks across multifarious disciplines, such as physics, biology, social science, engineering, and military science. 
Measures for network motifs play an indispensable role in the procedures of motif measurement and evaluation which are crucial steps in motif detection, counting, and clustering. However, there is a relatively small body of literature concerned with measures for network motifs. In this paper, we review the measures for network motifs in two categories: structural measures and statistical measures. The application scenarios for each measure and the distinctions of measures in similar scenarios are also summarized. We also conclude the challenges for using these measures and put forward some future directions on this topic. Overall, the objective of this survey is to provide an overview of motif measures, which is anticipated to shed light on the theory and practice of complex networks. <s> BIB007 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Other Surveys and Related Work <s> Network motifs are the building blocks of complex networks. Studying these frequently occurring patterns disclose a lot of information about these networks. The applications of Network motifs are very much evident now-a-days, in almost every field including biological networks, World Wide Web (WWW), etc. Some of the important motifs are feed forward loops, bi-fan, bi-parallel, fully connected triads. But, discovering these motifs is a computationally challenging task. In this paper, various techniques that are used to discover motifs are presented, along with detailed discussions on several issues and challenges in this area. <s> BIB008
To the best of our knowledge, there is no other work comparable to this survey in terms of scope, thoroughness and recency. Most of the already existing surveys that deal with subgraph counting are directly related to network motif discovery. Some of them are from before 2015 and therefore predate many of the most recent algorithmic advances BIB004 BIB003 BIB001 BIB005 BIB002, and all of them only present a small subset of the strategies discussed here. There are more recent review papers, but they all differ from our work and have a much smaller scope. Al Hasan and Dave BIB006 only consider triangle counting, Xia et al. BIB007 focus mainly on significance metrics, and finally, while we here present a structured overview of more than 50 exact, approximate and parallel algorithmic approaches, Jain and Patgiri BIB008 present a much simpler description of 5 different algorithms.
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Classical methods. <s> Motifs in a network are small connected subnetworks that occur in significantly higher frequencies than in random networks. They have recently gathered much attention as a useful concept to uncover structural design principles of complex networks. Kashtan et al. [Bioinformatics, 2004] proposed a sampling algorithm for efficiently performing the computationally challenging task of detecting network motifs. However, among other drawbacks, this algorithm suffers from sampling bias and is only efficient when the motifs are small (3 or 4 nodes). Based on a detailed analysis of the previous algorithm, we present a new algorithm for network motif detection which overcomes these drawbacks. Experiments on a testbed of biological networks show our algorithm to be orders of magnitude faster than previous approaches. This allows for the detection of larger motifs in bigger networks than was previously possible, facilitating deeper insight into the field. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Classical methods. <s> Network motifs are small connected sub-graphs occurring at significantly higher frequencies in a given graph compared with random graphs of similar degree distribution. Recently, network motifs have attracted attention as a tool to study networks microscopic details. The commonly used algorithm for counting small-scale motifs is the one developed by Milo et al. This algorithm is extremely costly in CPU time and actually cannot work on large networks, consisting of more than 100,000 edges on current CPUs. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Classical methods. 
<s> BackgroundComplex networks are studied across many fields of science and are particularly important to understand biological processes. Motifs in networks are small connected sub-graphs that occur significantly in higher frequencies than in random networks. They have recently gathered much attention as a useful concept to uncover structural design principles of complex networks. Existing algorithms for finding network motifs are extremely costly in CPU time and memory consumption and have practically restrictions on the size of motifs.ResultsWe present a new algorithm (Kavosh), for finding k-size network motifs with less memory and CPU time in comparison to other existing algorithms. Our algorithm is based on counting all k-size sub-graphs of a given graph (directed or undirected). We evaluated our algorithm on biological networks of E. coli and S. cereviciae, and also on non-biological networks: a social and an electronic network.ConclusionThe efficiency of our algorithm is demonstrated by comparing the obtained results with three well-known motif finding tools. For comparison, the CPU time, memory usage and the similarities of obtained motifs are considered. Besides, Kavosh can be employed for finding motifs of size greater than eight, while most of the other algorithms have restriction on motifs with size greater than eight. The Kavosh source code and help files are freely available at: http://Lbb.ut.ac.ir/Download/LBBsoft/Kavosh/. <s> BIB003
In their seminal work, Milo et al. first defined the concept of network motif and also proposed MFinder, an algorithm to count subgraphs. MFinder is a recursive backtracking algorithm that is applied to each edge of the network. A given edge is initially stored in a set S, which is recursively grown using edges that are not in S but share one endpoint with at least one edge in S. When |S| = k, the algorithm checks whether the subgraph induced by S has been found for the first time by keeping a hash table of subgraphs already found. If the subgraph was reached for the first time, the algorithm categorizes it and updates the hash table (otherwise, the subgraph is ignored). Another very important work, by Wernicke BIB001, proposed a new algorithm called ESU, also known as FANMOD due to the graphical tool that uses ESU as its core algorithm. This algorithm greatly improved on MFinder by never counting the same subgraph twice, thus avoiding the need to store all subgraphs in a hash table. ESU applies the same recursive method to each vertex v of the input graph G: it uses two sets V_S and V_E, which are initially set to V_S = {v} and V_E = N(v). Then, for each vertex u in V_E, it removes u from V_E, makes V_S = V_S ∪ {u} (effectively adding u to the subgraph being enumerated) and extends V_E with {w ∈ N_exc(u, V_S) : L(w) > L(v)}, where v is the original vertex added to V_S. The exclusive neighbourhood N_exc makes sure we only grow the list of possibilities with vertices not already in V_S or adjacent to it, and the condition L(w) > L(v) is used to break symmetries, consequently preventing any subgraph from being found twice. This process is repeated until V_S has k elements, which means V_S contains a single occurrence of a k-subgraph. At the end of the process, ESU performs isomorphism tests to assess the category of each subgraph occurrence, which is a considerable bottleneck. Itzhack et al. BIB002 proposed a new algorithm that was able to count subgraphs using constant memory (in relation to the size of the input graph).
Itzhack et al. did not name their algorithm, so we will refer to it as Itzhack from here on. Itzhack avoids explicitly computing the isomorphism class of each counted subgraph by caching it for each different adjacency matrix, seen as a bitstring. This strategy only works for subgraphs with k up to 5, since it would use too much memory for higher values. Additionally, the enumeration algorithm is also different from ESU. This method is based on counting all subgraphs that include a certain vertex, then removing that node from the network and repeating the same procedure for the remaining nodes. For each vertex v, the algorithm first considers the tree composed of the k-neighborhood of v, that is, a tree of all vertices at a distance of k − 1 or less from v. This is very similar to the tree obtained from performing a breadth-first search starting on v, with the difference that vertices that appear on previous levels of the tree are excluded if visited again. This tree can be traversed in a way that avoids actually creating it by following neighbors, and thus only using constant memory. To perform the actual search, the method uses the concept of counting patterns, which are different combinatorial ways of choosing vertices from different levels of the tree. For instance, if we are searching for 3-subgraphs, and considering that at the tree root level we can only have one vertex, we could have the combinations with pattern 1-2 (one vertex at root level 0, two vertices at level 1) or with pattern 1-1-1 (one vertex at root level 0, one at level 1 and one at level 2). In an analogous way, 4-subgraphs would lead to patterns 1-1-1-1, 1-1-2, 1-2-1 and 1-3. Itzhack et al. claimed that Itzhack is over 1,000 times faster than ESU; however, the author of ESU disputed this claim, stating that the experimental setup was faulty and that Itzhack is only slightly faster than ESU (its speedup could be attributed mainly to the caching procedure). Kashani et al.
BIB003 proposed a new algorithm called Kavosh. Like ESU and Itzhack, the core idea of Kavosh is to find all subgraphs that include a particular vertex, then remove that vertex and continue from there iteratively. Its functioning is very similar to that of Itzhack: it builds an implicit breadth-first search tree and then uses a concept similar to the counting patterns used by Itzhack. However, it is a more general method, since it does not perform any caching of isomorphism information, allowing the enumeration of larger subgraphs.
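The ESU enumeration described above can be condensed into a short sketch (our own illustrative rendering, not the FANMOD implementation); note how the exclusive neighbourhood and the label condition together guarantee that each connected k-subgraph is generated exactly once.

```python
def esu(adj, k):
    """Enumerate every connected k-vertex induced subgraph exactly once,
    following Wernicke's ESU: extensions come only from the exclusive
    neighbourhood, restricted to labels greater than the root vertex v."""
    found = []

    def extend(v_sub, v_ext, v):
        if len(v_sub) == k:
            found.append(frozenset(v_sub))
            return
        while v_ext:
            u = v_ext.pop()  # u is considered once at this level
            # exclusive neighbourhood of u: neighbours not in v_sub
            # and not adjacent to v_sub, with label greater than v
            excl = {w for w in adj[u]
                    if w > v and w not in v_sub
                    and all(w not in adj[x] for x in v_sub)}
            extend(v_sub | {u}, v_ext | excl, v)

    for v in adj:
        extend({v}, {u for u in adj[v] if u > v}, v)
    return found

# toy graph: a triangle 0-1-2 with a pendant vertex 3 attached to 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(sorted(map(sorted, esu(adj, 3))))  # [[0, 1, 2], [0, 2, 3], [1, 2, 3]]
```

In a full tool, each frozenset returned here would still be passed through an isomorphism test (the bottleneck mentioned above) to determine its subgraph category.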
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 3.1.2 <s> The study of biological networks and network motifs can yield significant new insights into systems biology. Previous methods of discovering network motifs - network-centric subgraph enumeration and sampling - have been limited to motifs of 6 to 8 nodes, revealing only the smallest network components. New methods are necessary to identify larger network sub-structures and functional motifs. ::: ::: Here we present a novel algorithm for discovering large network motifs that achieves these goals, based on a novel symmetry-breaking technique, which eliminates repeated isomorphism testing, leading to an exponential speed-up over previous methods. This technique is made possible by reversing the traditional network-based search at the heart of the algorithm to a motif-based search, which also eliminates the need to store all motifs of a given size and enables parallelization and scaling. Additionally, our method enables us to study the clustering properties of discovered motifs, revealing even larger network elements. ::: ::: We apply this algorithm to the protein-protein interaction network and transcription regulatory network of S. cerevisiae, and discover several large network motifs, which were previously inaccessible to existing methods, including a 29-node cluster of 15-node motifs corresponding to the key transcription machinery of S. cerevisiae. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 3.1.2 <s> Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. 
ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 3.1.2 <s> Subgraph matching algorithms are used to find and enumerate specific interconnection structures in networks. By enumerating these specific structures/subgraphs, the fundamental properties of the network can be derived. More specifically in biological networks, subgraph matching algorithms are used to discover network motifs, specific patterns occurring more often than expected by chance. Finding these network motifs yields information on the underlying biological relations modelled by the network. In this work, we present the Index-based Subgraph Matching Algorithm with General Symmetries (ISMAGS), an improved version of the Index-based Subgraph Matching Algorithm (ISMA). ISMA quickly finds all instances of a predefined motif in a network by intelligently exploring the search space and taking into account easily identifiable symmetric structures. However, more complex symmetries (possibly involving switching multiple nodes) are not taken into account, resulting in superfluous output. ISMAGS overcomes this problem by using a customised symmetry analysis phase to detect all symmetric structures in the network motif subgraphs. 
These structures are then converted to symmetry-breaking constraints used to prune the search space and speed up calculations. The performance of the algorithm was tested on several types of networks (biological, social and computer networks) for various subgraphs with a varying degree of symmetry. For subgraphs with complex (multi-node) symmetric structures, high speed-up factors are obtained as the search space is pruned by the symmetry-breaking constraints. For subgraphs with no or simple symmetric structures, ISMAGS still reduces computation times by optimising set operations. Moreover, the calculated list of subgraph instances is minimal as it contains no instances that differ by only a subgraph symmetry. An implementation of the algorithm is freely available at https://github.com/mhoubraken/ISMAGS. <s> BIB003
Single-subgraph-search methods. The idea that it is possible to obtain a very efficient method for counting a single subgraph category was first noticed by Grochow and Kellis BIB001. Their base method consists of a backtracking algorithm that is applied to each vertex. It tries to build a partial mapping from the input graph to the target subgraph (the subgraph it is trying to count) by building all possible assignments based on the number of neighbours. Grochow and Kellis also suggested an improvement based on symmetry breaking, using the automorphisms of the target subgraph to build a set of conditions of the form L(a) < L(b) that prevent the same subgraph from being counted multiple times. This symmetry-breaking idea allowed for considerable improvements in runtime, especially for higher values of k. Grochow and Kellis did not name their algorithm, so we will refer to it as the Grochow algorithm from here on. Koskas et al. presented a new algorithm which they called NeMo. This method draws some ideas from Grochow, since it performs a backtracking search with symmetry breaking in a similar fashion. However, instead of using conditions on vertex labels, it finds the orbits of the target subgraph and forces an ordering between the labels of the input-graph vertices that match target-subgraph vertices in the same orbit. Additionally, it uses a few heuristics to prune the search early, such as ordering the vertices of the target graph so that, for all 1 ≤ i ≤ k, its first i vertices are connected. ISMAGS, which is based on its predecessor ISMA BIB002, was proposed by Houbraken et al. BIB003. The base idea of this method is similar to the one in Grochow; however, the authors use a clever node ordering and other heuristics to speed up the partial mapping procedure. Additionally, their symmetry-breaking conditions are significantly improved by applying several heuristic techniques based on group theory.
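For the simplest possible target subgraph, the triangle, the symmetry-breaking idea reduces to a single ordering constraint: the triangle's six automorphisms are all broken by requiring L(a) < L(b) < L(c). The sketch below (our own illustration, not Grochow and Kellis's code) uses this to count each triangle exactly once.

```python
def count_triangles(adj):
    """Single-subgraph search for the triangle, in the spirit of
    Grochow-Kellis symmetry breaking: insisting on the label ordering
    a < b < c means every triangle is found exactly once instead of
    six times (once per automorphism)."""
    count = 0
    for a in adj:
        for b in adj[a]:
            if b <= a:
                continue                  # symmetry breaking: a < b
            for c in adj[b]:
                if c > b and c in adj[a]: # b < c, and edge a-c closes it
                    count += 1
    return count

# toy graph: a triangle 0-1-2 with a pendant vertex 3 attached to 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(count_triangles(adj))  # 1
```

For larger targets the same principle applies, but the ordering constraints must be derived from the target's automorphism group rather than written by hand, which is exactly what the Grochow algorithm and ISMAGS automate.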
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets

Encapsulation methods.
The ideas applied in Grochow introduced a way of escaping the classic setup of enumerating and then categorizing subgraphs, albeit focusing on a single subgraph. The next step would be to extend this idea to a more general algorithm appropriate for full subgraph counting. This was first done by Ribeiro and Silva BIB003 using a new data-structure they called the g-trie, for graph trie. The g-trie is a prefix tree for graphs: each node represents a different graph, and the graph of a parent node shares a common substructure with the graph of each of its children, which extend it by one additional vertex. The root represents the one-vertex graph and has one child, a node representing the single-edge graph, which in turn has two children representing the triangle and the 3-path, and so on. This tree can be augmented by giving each node symmetry-breaking conditions similar to those of Grochow. The authors show how to efficiently build this data-structure, and augment it with the symmetry-breaking conditions, for any set of graphs. They also describe a subgraph counting algorithm that combines this data-structure with an enumeration technique similar to that of Grochow. Since the data-structure encapsulates the information of multiple graphs in a hierarchical order, it achieves a much faster full subgraph counting algorithm. The usage of this data-structure has been significantly extended since its original publication, for example with a version for colored networks BIB009 and an orbit-aware version BIB013 . A more detailed discussion of the data-structure and the subgraph counting algorithm is presented in BIB006 . Even though the subgraph counting algorithm was not named, we will refer to it as the Gtrie algorithm from here on. Gtrie encapsulates common topological information of the subgraphs being counted, but there are other approaches, such as that of Li et al. BIB004 , who developed Netmode.
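The prefix-sharing principle behind the g-trie can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: real g-tries rely on a specialized canonical labeling to maximize shared prefixes and annotate nodes with symmetry-breaking conditions, both omitted here; the class and method names are our own.

```python
class GTrieNode:
    """Minimal g-trie sketch: each node stores the adjacency of one new
    vertex towards the vertices on the path from the root, so graphs
    with a common substructure share a common path."""

    def __init__(self):
        self.children = {}     # adjacency-row tuple -> child node
        self.is_graph = False  # marks that the path so far is a stored graph

    def insert(self, rows):
        """rows[i] = tuple of 0/1 adjacencies of vertex i to vertices 0..i-1."""
        node = self
        for row in rows:
            node = node.children.setdefault(row, GTrieNode())
        node.is_graph = True

# Store the triangle and the 3-path; they share their first two levels
# (the one-vertex graph and the single-edge graph).
root = GTrieNode()
root.insert([(), (1,), (1, 1)])  # triangle: vertex 2 adjacent to 0 and 1
root.insert([(), (1,), (1, 0)])  # 3-path: vertex 2 adjacent only to 0
```

During counting, one traversal of the input graph can match all stored subgraphs at once, descending only into children whose adjacency row is consistent with the current partial enumeration.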
It builds on Kavosh, using its enumeration algorithm, but instead of using nauty to perform the categorization step it uses a cache to store isomorphism information, and is thus able to perform that step in constant time. This is very similar to what Itzhack does; however, Li et al. suggested an improvement that allows Netmode to scale to k = 6 without using too much memory. This improvement is based on the reconstruction conjecture BIB001 , which states that two graphs with 3 or more vertices are isomorphic if their decks (the multiset of isomorphism classes of all vertex-deleted subgraphs of a graph) are the same. The conjecture is known to be false for directed graphs with k = 6, but there are very few counter-examples, and these can be stored directly as in the k ≤ 5 case; Netmode applies the conjecture to all remaining cases by building the deck of each subgraph, hashing its value and storing its count in a table. Wang et al. BIB005 proposed a new method called SCMD that counts subgraphs in compressed networks. SCMD applies a symmetry compression method that finds sets of vertices inducing either a clique or an empty subgraph, with the additional property that any outside vertex connected to one vertex of the set is connected to all vertices of the set. These sets of vertices form a partition of the graph, obtained using a method published in BIB002 that is based on looking at vertices in the same orbit. This is a versatile method that can use algorithms like ESU or Kavosh to enumerate all subgraphs of sizes 1 to k in the compressed network. Finally, SCMD "decompresses" the results by looking at all the enumerated compressed subgraphs and calculating all the combinations that can form a decompressed subgraph.
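The deck idea behind Netmode's k = 6 trick can be made concrete with a small sketch. This is our own illustration, not NetMODE's actual hashing scheme: the graph encoding and the brute-force canonical form (feasible only for tiny graphs) are assumptions for clarity.

```python
from itertools import permutations

def canon(n, edges):
    # Brute-force canonical form over all relabelings (tiny n only).
    return min(
        tuple(sorted(tuple(sorted((p[a], p[b]))) for a, b in edges))
        for p in permutations(range(n))
    )

def deck_hash(n, edges):
    """Hash of the multiset of vertex-deleted subgraphs (the "deck"),
    the object the reconstruction conjecture is about: if two graphs
    have different deck hashes, they are certainly non-isomorphic."""
    cards = []
    for v in range(n):
        keep = [u for u in range(n) if u != v]
        relabel = {u: i for i, u in enumerate(keep)}
        sub = [(relabel[a], relabel[b]) for a, b in edges
               if a != v and b != v]
        cards.append(canon(n - 1, sub))
    return hash(tuple(sorted(cards)))
```

Two isomorphic graphs always produce the same deck hash, so the hash can index a count table; the (rare) directed k = 6 counter-examples to the conjecture must be special-cased, as Netmode does.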
For example, for k = 3, suppose a compressed 2-subgraph is found containing two vertices: one compressed vertex representing a clique of 5 uncompressed vertices and one compressed vertex representing a single vertex of the uncompressed graph. This results in C(5,2) + C(5,3) = 20 triangles in the uncompressed graph: taking two vertices from the clique plus the single vertex, which are all connected and thus form a triangle, gives C(5,2) possibilities, and taking three vertices from the clique gives another C(5,3). The authors argue that most complex networks exhibit high symmetry and are thus improved by the application of this technique. Even though their work only covers undirected graphs, the authors affirm that it is easy to extend the same concepts to directed networks. Xu et al. described another algorithm that enumerates subgraphs on compressed networks, called ENSA BIB010 BIB011 . Their method is based on a heuristic graph isomorphism algorithm, and they also discuss an optimization based on identifying vertices with unique degrees. Following the ideas first applied in Gtrie, Khakabimamaghani et al. BIB007 proposed a new algorithm they called Quatexelero. Quatexelero is built upon any incremental enumeration algorithm, like ESU, and implements a data structure similar to a quaternary tree. Each node in the tree represents a graph, which can be reconstructed from the nodes on the path between it and the root of the tree; additionally, all occurrences filed under a single node belong to the same isomorphism class. To fill the tree, a pointer is initially set to the root. Whenever a new vertex is added to the partial enumeration mapping, Quatexelero inspects the edges between the newly added vertex and the vertices already in the mapping and stores this information in the quaternary tree.
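The decompression arithmetic of this kind of example is plain binomial counting and can be checked directly (a sketch using Python's standard `math.comb`; the clique size 5 matches the example above):

```python
from math import comb

# A compressed edge between a clique vertex (5 uncompressed vertices)
# and an ordinary vertex yields triangles in two ways:
#  - 2 clique vertices + the ordinary vertex  -> C(5, 2)
#  - 3 clique vertices                        -> C(5, 3)
triangles = comb(5, 2) + comb(5, 3)
print(triangles)  # -> 20
```

The general decompression step enumerates, for each compressed subgraph, all ways of distributing the k vertices among its compressed vertices, multiplying the corresponding binomial coefficients.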
For each vertex in the mapping, depending on whether there is no edge, an in-edge, an out-edge or a bidirectional edge between it and the newly added vertex, the pointer moves to one of the node's four children, which is created if it does not yet exist. In parallel to the publication of Quatexelero, Paredes and Ribeiro BIB008 proposed FaSE. The idea of FaSE is similar to that of Quatexelero; however, instead of a quaternary tree it uses a data-structure similar to the g-trie, albeit without the symmetry-breaking condition augmentation. This data-structure has the same property as the quaternary tree: every node represents a graph, and each node is built using the adjacency information of a newly added vertex in relation to the vertices present in its parent. Other works extending these ideas have been proposed subsequently. For example, Jing and Cheng propose Hash-ESU, an algorithm based on the same idea as Quatexelero and FaSE, but which hashes the adjacency information instead of storing it in a tree. Another example is the work by Song et al. BIB012 , who describe a method that starts by enumerating all k = 3 subgraphs using ESU and then uses dynamic programming to grow connected sets and perform the counting. Their algorithm was not named, so we will refer to it as the Song algorithm from here on. Both Quatexelero and FaSE have potential memory issues, since there may be several nodes representing the same graph; this is not a problem for Gtrie, which stores only one copy of each possible graph. To address this, Himamshu and Jain BIB014 proposed Patcomp, which compresses the quaternary tree using a technique similar to a radix tree; however, their method is 2 to 3 times slower and saves only around 10% of the memory usage.
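The common trick of Quatexelero and FaSE — filing each occurrence under the sequence of adjacency patterns of every newly added vertex, so that the expensive isomorphism test runs once per tree leaf instead of once per occurrence — can be sketched for the undirected case as follows. This is an assumed, simplified rendition: the function name is ours, the tree is flattened into a dictionary keyed by the label path, and the directed four-way branching is reduced to edge/non-edge.

```python
from collections import defaultdict

def classify_by_label_path(enumerations, G):
    """Group subgraph occurrences by the adjacency pattern of each newly
    added vertex towards the previously added ones. Occurrences filed
    under the same path are guaranteed isomorphic; distinct paths may
    still be isomorphic, hence one isomorphism test per group remains."""
    counts = defaultdict(int)
    for vertices in enumerations:  # each an ordered tuple of graph vertices
        path = tuple(
            tuple(int(u in G[v]) for u in vertices[:i])
            for i, v in enumerate(vertices)
        )
        counts[path] += 1
    return counts

G = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}  # square plus a diagonal
groups = classify_by_label_path([(0, 1, 2), (0, 3, 2), (1, 0, 3)], G)
```

Here the two triangles (0,1,2) and (0,3,2) land on the same path ((), (1,), (1,1)), while the path-shaped occurrence (1,0,3) lands elsewhere, so only two groups need categorization.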
3.2.1
Matrix based methods. The first known method to apply a practical analytic, matrix-based approach to subgraph counting was ORCA, a work by Hočevar and Demšar BIB001 , which counts orbits rather than subgraphs directly. Their original work targeted orbits in subgraphs of up to 5 vertices and, because of that, counts induced subgraphs specifically, while most analytic approaches count non-induced occurrences. ORCA works by setting up, for each vertex of the input graph, a system of linear equations that relates the frequencies of different orbits, which are the system's variables; the coefficients encode information about the input graph. By construction, the matrix has rank equal to the number of orbits minus 1, so to solve the system one only needs to find the value of one orbit frequency and then apply any standard linear algebra method. Usually the orbit pertaining to the clique is chosen, since there are efficient algorithms to count it and, for sufficiently sparse networks, it is usually the orbit with the fewest occurrences, making it the least expensive to count. Later, the authors of ORCA extended their work by suggesting a way of producing equations for arbitrarily sized subgraphs BIB003 , although their available practical implementation is still limited to size 5 [64] . Another possible extension of ORCA was proposed in BIB004 with the Jesse algorithm, which was further complemented with a strategy for optimizing the computation by carefully selecting less expensive equations BIB005 . Similar to ORCA, but using a different strategy, Ortmann and Brandes BIB002 proposed a new method, which they further improved and better described in . They also target orbits, but for subgraphs of size up to 4. Their approach is based on counting non-induced subgraphs and using them to build linear equations that are less expensive to compute. Additionally, they apply an improved clique counting algorithm.
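The flavour of these equation-based methods can be shown on the 3-node case, where the relations are simple enough to state in closed form. This is our own illustration of the principle (enumerate one orbit, derive the rest from linear relations), not ORCA's actual equation system, which targets 4- and 5-node orbits; the orbit numbering follows the usual graphlet convention (1 = path end, 2 = path centre, 3 = triangle).

```python
from itertools import combinations
from math import comb

def orbit_counts_k3(G):
    """Analytic per-vertex orbit counts for 3-node graphlets: only
    triangles are enumerated; both path orbits follow from equations.
    G: dict vertex -> set of neighbours (undirected graph)."""
    # the single enumerated quantity: triangles through each vertex
    t = {u: 0 for u in G}
    for u in G:
        for v, w in combinations(sorted(G[u]), 2):
            if w in G[v]:
                t[u] += 1
    orbits = {}
    for u in G:
        d = len(G[u])
        o3 = t[u]                                   # triangle orbit
        o2 = comb(d, 2) - t[u]                      # induced path centre
        # walks u-v-w minus the two walks inside each triangle through u
        o1 = sum(len(G[v]) - 1 for v in G[u]) - 2 * t[u]  # induced path end
        orbits[u] = (o1, o2, o3)
    return orbits

G = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}  # square plus a diagonal
print(orbit_counts_k3(G)[1])  # -> (2, 0, 1)
```

The same pattern scales up: for larger k the relations no longer have such compact closed forms, and methods like ORCA instead assemble them into a linear system solved per vertex.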
Ortmann and Brandes BIB002 did not name their algorithm, so we will refer to it as the Ortmann algorithm from here on.
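The equation-based idea underlying ORCA and the Ortmann algorithm can be made concrete with a minimal size-3 sketch (this is an illustrative toy, not ORCA's actual equation system, and all names in it are hypothetical): for each vertex v, the number of pairs of its neighbors equals the number of induced paths centered at v plus the number of triangles containing v. Counting the "clique" orbit (triangles) directly therefore lets us solve for the remaining orbit.

```python
from itertools import combinations

def orbit_counts_size3(adj):
    """For each vertex v: C(deg(v), 2) = paths_centered_at(v) + triangles_at(v).
    Counting triangles directly lets us solve the equation for the
    path-center orbit, mirroring the strategy of solving a linear system
    after counting one orbit (the clique) explicitly."""
    triangles = {v: 0 for v in adj}
    for v in adj:
        for a, b in combinations(sorted(adj[v]), 2):
            if b in adj[a]:
                triangles[v] += 1
    path_center = {}
    for v in adj:
        d = len(adj[v])
        path_center[v] = d * (d - 1) // 2 - triangles[v]  # solve for the orbit
    return triangles, path_center

# Triangle 0-1-2 with a pendant vertex 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
tri, pc = orbit_counts_size3(adj)
# vertex 2 is the center of the two induced paths 0-2-3 and 1-2-3
```

The full methods set up one such equation per orbit, yielding a linear system whose solution gives all orbit frequencies at once.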
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods. <s> World Wide Web, the Internet, coupled biological and chemical systems, neural networks, and social interacting species, are only a few examples of systems composed by a large number of highly interconnected dynamical units. These networks contain characteristic patterns, termed network motifs, which occur far more often than in randomized networks with the same degree sequence. Several algorithms have been suggested for counting or detecting the number of induced or non-induced occurrences of network motifs in the form of trees and bounded treewidth subgraphs of size O(logn), and of size at most 7 for some motifs. ::: ::: In addition, counting the number of motifs a node is part of was recently suggested as a method to classify nodes in the network. The promise is that the distribution of motifs a node participates in is an indication of its function in the network. Therefore, counting the number of network motifs a node is part of provides a major challenge. However, no such practical algorithm exists. ::: ::: We present several algorithms with time complexity $O(e^{2k}k\cdot n \cdot |E|\cdot \log\frac{1}{\delta}/\epsilon^2)$ that, for the first time, approximate for every vertex the number of non-induced occurrences of the motif the vertex is part of, for k-length cycles, k-length cycles with a chord, and (k − 1)-length paths, where k = O(logn), and for all motifs of size of at most four. In addition, we show algorithms that approximate the total number of non-induced occurrences of these network motifs, when no efficient algorithm exists. Some of our algorithms use the color coding technique. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods.
<s> Counting network motifs has an important role in studying a wide range of complex networks. However, when the network size is large, as in the case of Internet Topology and WWW graphs, counting the number of motifs becomes prohibitive. Devising efficient motif counting algorithms thus becomes an important goal. In this paper, we present efficient counting algorithms for 4-node motifs. We show how to efficiently count the total number of each type of motif, and the number of motifs adjacent to a node. We further present a new algorithm for node position-aware motif counting, namely partitioning the motif count by the node position in the motif. Since our algorithm is based on motifs, which are non-induced, we also show how to calculate the count of induced motifs given the non-induced motif count. Finally, we report on initial implementation performance results using evaluation on a large-scale graph. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods. <s> Counting network graphlets (and motifs) was shown to have an important role in studying a wide range of complex networks. However, when the network size is large, as in the case of the Internet topology and WWW graphs, counting the number of graphlets becomes prohibitive for graphlets of size 4 and above. Devising efficient graphlet counting algorithms thus becomes an important goal. In this paper, we present efficient counting algorithms for 4-node graphlets. We show how to efficiently count the total number of each type of graphlet, and the number of graphlets adjacent to a node. We further present a new algorithm for node position-aware graphlet counting, namely partitioning the graphlet count by the node position in the graphlet. Since our algorithms are based on non-induced graphlet count, we also show how to calculate the count of induced graphlets given the non-induced count.
We implemented our algorithms on a set of both synthetic and real-world graphs. Our evaluation shows that the algorithms are scalable and perform up to 30 times faster than the state-of-the-art. We then apply the algorithms on the Internet Autonomous Systems (AS) graph, and show how fast graphlet counting can be leveraged for efficient and scalable classification of the ASes that comprise the Internet. Finally, we present RAGE, a tool for rapid graphlet enumeration available online. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods. <s> Network motif algorithms have been a topic of research mainly after the 2002-seminal paper from Milo \emph{et al}, that provided motifs as a way to uncover the basic building blocks of most networks. This article proposes new algorithms to exactly count isomorphic pattern motifs of size~3 and~4 in directed graphs. The algorithms are accelerated by combinatorial techniques. Let $G(V, E)$ be a directed graph with $m=|E|$. We describe an $O({m\sqrt{m}})$ time complexity algorithm to count isomorphic patterns of size~3. To counting isomorphic patterns of size~4, we propose an $O(m^2)$ algorithm. The new algorithms were implemented and compared with Fanmod motif detection tool. The experiments show that our algorithms are expressively faster than Fanmod. We also let our tool to detect motifs, the {\sc acc-MOTIF}, available in the Internet. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods. <s> Network motif algorithms have been a topic of research mainly after the 2002-seminal paper from Milo et al. [1], which provided motifs as a way to uncover the basic building blocks of most networks. Motifs have been mainly applied in Bioinformatics, regarding gene regulation networks. Motif detection is based on induced subgraph counting. 
This paper proposes an algorithm to count subgraphs of size k + 2 based on the set of induced subgraphs of size k. The general technique was applied to detect 3, 4 and 5-sized motifs in directed graphs. Such algorithms have time complexity $O(a(G)m)$, $O(m^2)$ and $O(nm^2)$, respectively, where a(G) is the arboricity of G(V,E). The computational experiments in public data sets show that the proposed technique was one order of magnitude faster than Kavosh and FANMOD. When compared to NetMODE, acc-Motif had a slightly improved performance. <s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods. <s> From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level as well as the local micro-level. While graphlets have witnessed a tremendous success and impact in a variety of domains, there has yet to be a fast and efficient approach for computing the frequencies of these subgraph patterns. However, existing methods are not scalable to large networks with millions of nodes and edges, which impedes the application of graphlets to new problems that require large-scale network analysis. To address these problems, we propose a fast, efficient, and parallel algorithm for counting graphlets of size k={3,4}-nodes that take only a fraction of the time to compute when compared with the current methods used. The proposed graphlet counting algorithms leverages a number of proven combinatorial arguments for different graphlets. For each edge, we count a few graphlets, and with these counts along with the combinatorial arguments, we obtain the exact counts of others in constant time. On a large collection of 300+ networks from a variety of domains, our graphlet counting strategies are on average 460x faster than current methods.
This brings new opportunities to investigate the use of graphlets on much larger networks and newer applications as we show in the experiments. To the best of our knowledge, this paper provides the largest graphlet computations to date as well as the largest systematic investigation on over 300+ networks from a variety of domains. <s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods. <s> From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level as well as the local micro-level. While graphlets have witnessed a tremendous success and impact in a variety of domains, there has yet to be a fast and efficient approach for computing the frequencies of these subgraph patterns. However, existing methods are not scalable to large networks with millions of nodes and edges, which impedes the application of graphlets to new problems that require large-scale network analysis. To address these problems, we propose a fast, efficient, and parallel algorithm for counting graphlets of size k={3,4}-nodes that take only a fraction of the time to compute when compared with the current methods used. The proposed graphlet counting algorithms leverages a number of proven combinatorial arguments for different graphlets. For each edge, we count a few graphlets, and with these counts along with the combinatorial arguments, we obtain the exact counts of others in constant time. On a large collection of 300+ networks from a variety of domains, our graphlet counting strategies are on average 460x faster than current methods. This brings new opportunities to investigate the use of graphlets on much larger networks and newer applications as we show in the experiments. 
To the best of our knowledge, this paper provides the largest graphlet computations to date as well as the largest systematic investigation on over 300+ networks from a variety of domains. <s> BIB007 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Decomposition methods. <s> Counting the frequency of small subgraphs is a fundamental technique in network analysis across various domains, most notably in bioinformatics and social networks. The special case of triangle counting has received much attention. Getting results for 4-vertex or 5-vertex patterns is highly challenging, and there are few practical results known that can scale to massive sizes. ::: ::: We introduce an algorithmic framework that can be adopted to count any small pattern in a graph and apply this framework to compute exact counts for all 5-vertex subgraphs. Our framework is built on cutting a pattern into smaller ones, and using counts of smaller patterns to get larger counts. Furthermore, we exploit degree orientations of the graph to reduce runtimes even further. These methods avoid the combinatorial explosion that typical subgraph counting algorithms face. We prove that it suffices to enumerate only four specific subgraphs (three of them have less than 5 vertices) to exactly count all 5-vertex patterns. ::: ::: We perform extensive empirical experiments on a variety of real-world graphs. We are able to compute counts of graphs with tens of millions of edges in minutes on a commodity machine. To the best of our knowledge, this is the first practical algorithm for 5-vertex pattern counting that runs at this scale. A stepping stone to our main algorithm is a fast method for counting all 4-vertex patterns. This algorithm is typically ten times faster than the state of the art 4-vertex counters. <s> BIB008
Before ORCA was proposed, the first practical method to use an analytic approach to subgraph counting was Rage, by Marcus and Shavitt BIB002 BIB003 . Their method is based on BIB001 , which employs similar techniques but with a more theoretical focus. Rage targets non-induced subgraphs and orbits of sizes 3 and 4, running a different algorithm for each of the 8 existing subgraphs of those sizes. Each algorithm merges the neighborhoods of pairs of vertices to ensure that a given quartet of vertices has the edges required to form a certain subgraph. acc-Motif, which was proposed by Meira et al. BIB004 and then further improved in BIB005 , was also one of the first methods to employ an analytic strategy, and it stands out as the only known analytic method that also works for directed subgraphs. acc-Motif likewise targets non-induced subgraphs, and its latest version supports subgraphs of up to size 6. Another method that followed this trend of decomposition methods is PGD, proposed by Ahmed et al. BIB006 BIB007 . This method builds on the classic triangle counting algorithm to count several primitives that are then used to obtain the frequency of each subgraph and orbit. It is currently one of the fastest methods; however, it can only count undirected subgraphs of sizes 3 and 4. Additionally, like most analytic methods, it is highly parallelizable. Due to its versatile nature, PGD has been extended to other frequency metrics and it stands out as one of the few available efficient methods that can count subgraphs incident to a vertex or edge of the graph , in what is called a "local subgraph count". More recently, ESCAPE was proposed by Pinar et al. BIB008 . This method is based on a divide and conquer approach that identifies substructures of each subgraph to be counted in order to partition it into smaller patterns.
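To illustrate the kind of combinatorial argument these decomposition methods rely on (a simplified sketch, not PGD's or acc-Motif's actual implementation), the number of non-induced 4-vertex paths can be derived in constant time per edge from vertex degrees plus a triangle count: each path a-u-v-b is built by picking an edge (u,v) and one extra neighbor on each side, and the only invalid choices (a = b) correspond to triangles.

```python
def non_induced_3paths(adj):
    """Count non-induced paths on 4 vertices via the identity
    P3 = sum over edges (u,v) of (deg(u)-1)(deg(v)-1) - 3*T,
    where T is the number of triangles: choosing one neighbor on each
    side of an edge builds a path, except when both choices coincide,
    which happens exactly once per edge of every triangle."""
    edges = {(u, v) for u in adj for v in adj[u] if u < v}
    # each triangle x < y < z is counted once, from edge (x, y) with w = z
    T = sum(1 for (u, v) in edges for w in adj[u] & adj[v] if v < w)
    raw = sum((len(adj[u]) - 1) * (len(adj[v]) - 1) for (u, v) in edges)
    return raw - 3 * T

# Triangle 0-1-2 with a pendant vertex 3: the two non-induced
# 4-vertex paths are 0-1-2-3 and 1-0-2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
p3 = non_induced_3paths(adj)
```

The full methods assemble many identities of this shape, one per pattern, so that a single pass over edges yields all counts.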
It is a very general method, but with the correct choices of decomposition it is possible to derive a set of formulas to compute the frequency of each subgraph. The original paper only describes the resulting formulas for subgraphs up to size 5; however, larger sizes can be obtained with some effort. As of this writing, it is possibly the most efficient algorithm to count undirected subgraphs and orbits up to size 5.
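A toy example in the spirit of ESCAPE's pattern cutting (an illustrative sketch, not one of the paper's actual formulas): a 4-cycle can be cut along a diagonal into two wedges, so every 4-cycle is a pair of common neighbors of its two diagonal vertices. Since each 4-cycle has two diagonals, counting common-neighbor pairs over all vertex pairs and halving gives the 4-cycle count.

```python
from itertools import combinations

def count_4cycles(adj):
    """Count (non-induced) 4-cycles by cutting the pattern at a diagonal:
    a 4-cycle u-a-v-b-u is a pair {a, b} of common neighbors of {u, v}.
    Each cycle is found from both of its diagonals, hence the final halving."""
    total = 0
    for u, v in combinations(sorted(adj), 2):
        c = len(adj[u] & adj[v])   # number of wedges with endpoints u, v
        total += c * (c - 1) // 2  # pick 2 common neighbors -> one 4-cycle
    return total // 2

adj_k4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
c4 = count_4cycles(adj_k4)  # K4 contains 3 four-cycles
```

The divide and conquer step avoids enumerating the full pattern: only the smaller pieces (here, wedges) are ever materialized.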
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Theoretical Results <s> We give two algorithms for listing all simplicial vertices of a graph. The first of these algorithms takes O(n^α) time, where n is the number of vertices in the graph and O(n^α) is the time needed to perform a fast matrix multiplication. The second algorithm can be implemented to run in \(O(e^{\tfrac{{2\alpha }}{{\alpha + 1}}} ) = O(e^{1.41} )\), where e is the number of edges in the graph. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Theoretical Results <s> Network motifs are small connected sub-graphs occurring at significantly higher frequencies in a given graph compared with random graphs of similar degree distribution. Recently, network motifs have attracted attention as a tool to study networks' microscopic details. The commonly used algorithm for counting small-scale motifs is the one developed by Milo et al. This algorithm is extremely costly in CPU time and actually cannot work on large networks, consisting of more than 100,000 edges on current CPUs. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Theoretical Results <s> World Wide Web, the Internet, coupled biological and chemical systems, neural networks, and social interacting species, are only a few examples of systems composed by a large number of highly interconnected dynamical units. These networks contain characteristic patterns, termed network motifs, which occur far more often than in randomized networks with the same degree sequence. Several algorithms have been suggested for counting or detecting the number of induced or non-induced occurrences of network motifs in the form of trees and bounded treewidth subgraphs of size O(logn), and of size at most 7 for some motifs.
::: ::: In addition, counting the number of motifs a node is part of was recently suggested as a method to classify nodes in the network. The promise is that the distribution of motifs a node participates in is an indication of its function in the network. Therefore, counting the number of network motifs a node is part of provides a major challenge. However, no such practical algorithm exists. ::: ::: We present several algorithms with time complexity $O(e^{2k}k\cdot n \cdot |E|\cdot \log\frac{1}{\delta}/\epsilon^2)$ that, for the first time, approximate for every vertex the number of non-induced occurrences of the motif the vertex is part of, for k-length cycles, k-length cycles with a chord, and (k − 1)-length paths, where k = O(logn), and for all motifs of size of at most four. In addition, we show algorithms that approximate the total number of non-induced occurrences of these network motifs, when no efficient algorithm exists. Some of our algorithms use the color coding technique. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Theoretical Results <s> For a pattern graph H on k nodes, we consider the problems of finding and counting the number of (not necessarily induced) copies of H in a given large graph G on n nodes, as well as finding minimum weight copies in both node-weighted and edge-weighted graphs. Our results include: The number of copies of an H with an independent set of size s can be computed exactly in $O^*(2^s n^{k-s+3})$ time. A minimum weight copy of such an H (with arbitrary real weights on nodes and edges) can be found in $O(4^{s+o(s)} n^{k-s+3})$ time. (The $O^*$ notation omits poly(k) factors.) These algorithms rely on fast algorithms for computing the permanent of a k x n matrix, over rings and semirings.
The number of copies of any H having minimum (or maximum) node-weight (with arbitrary real weights on nodes) can be found in $O(n^{\omega k/3} + n^{2k/3+o(1)})$ time, where ω < 2.4 is the matrix multiplication exponent and k is divisible by 3. Similar results hold for other values of k. Also, the number of copies having exactly a prescribed weight can be found within this time. These algorithms extend the technique of Czumaj and Lingas (SODA 2007) and give a new (algorithmic) application of multiparty communication complexity. Finding an edge-weighted triangle of weight exactly 0 in general graphs requires $\Omega(n^{2.5-\varepsilon})$ time for all ε > 0, unless the 3SUM problem on N numbers can be solved in $O(N^{2-\varepsilon})$ time. This suggests that the edge-weighted problem is much harder than its node-weighted version. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Theoretical Results <s> In this paper we present a modification of a technique by Chiba and Nishizeki [Chiba and Nishizeki: Arboricity and Subgraph Listing Algorithms, SIAM J. Comput. 14(1), pp. 210--223 (1985)]. Based on it, we design a data structure suitable for dynamic graph algorithms. We employ the data structure to formulate new algorithms for several problems, including counting subgraphs of four vertices, recognition of diamond-free graphs, cop-win graphs and strongly chordal graphs, among others. We improve the time complexity for graphs with low arboricity or h-index. <s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Theoretical Results <s> We present a general technique for detecting and counting small subgraphs. It consists of forming special linear combinations of the numbers of occurrences of different induced subgraphs of fixed size in a graph. These combinations can be efficiently computed by rectangular matrix multiplication.
Our two main results utilizing the technique are as follows. Let $H$ be a fixed graph with $k$ vertices and an independent set of size $s.$ 1. Detecting if an $n$-vertex graph contains a (not necessarily induced) subgraph isomorphic to $H$ can be done in time $O(n^{\omega(\lceil (k-s)/2 \rceil, 1, \lfloor (k-s)/2 \rfloor )})$, where $\omega (p,q,r)$ is the exponent of fast arithmetic matrix multiplication of an $n^p\times n^q$ matrix by an $n^q\times n^r$ matrix. 2. When $s=2,$ counting the number of (not necessarily induced) subgraphs isomorphic to $H$ can be done in the same time, i.e., in time $O(n^{\omega(\lceil (k-2)/2 \rceil, 1, \lfloor (k-2)/2 \rfloor )}).$ It follows in particular that we can count the nu... <s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Theoretical Results <s> Graphs are extremely versatile and ubiquitous mathematical structures with potential to model a wide range of domains. For this reason, graph problems have been of interest since the early days of computer science. Some of these problems consider substructures of a graph that have certain properties. These substructures of interest, generally called patterns, are often meaningful in the domain being modeled. Classic examples of patterns include spanning trees, cycles and subgraphs. ::: This thesis focuses on the topic of explicitly listing all the patterns existing in an input graph. One of the defining features of this problem is that the number of patterns is frequently exponential on the size of the input graph. Thus, the time complexity of listing algorithms is parameterized by the size of the output. ::: The main contribution of this work is the presentation of optimal algorithms for four different problems of listing patterns in graphs, namely the listing of k-subtrees, k-subgraphs, st-paths and cycles. 
The algorithms presented are framed within the same generic approach, based in a recursive partition of the search space that divides the problem into subproblems. The key to an efficient implementation of this approach is to avoid recursing into subproblems that do not list any patterns. With this goal in sight, a dynamic data structure, called the certificate, is introduced and maintained throughout the recursion. Moreover, properties of the recursion tree and lower bounds on the number of patterns are used to amortize the cost of the algorithm on the size of the output. <s> BIB007
Even though the focus of this work is on the proposed practical algorithms, it is important to note that some of the existing work drew inspiration from numerous more theoretically-oriented works. Thus, it is relevant to briefly summarize some of the achievements in this area, with a special interest in those that directly influenced some of the algorithms discussed in this section. The first interest in subgraph counting stemmed from the world of enumeration algorithms. The book "Enumeration in Graphs" surveyed several methods to enumerate different structures in a graph, such as cycles, trees or cliques. Even though these are specific subpatterns, they often represent the fundamental computation that needs to be done in order to enumerate any subgraph. These ideas were translated into works that count subgraphs by efficiently enumerating simpler substructures like these BIB002 BIB001 . Approximation schemes can also be developed with this in mind, approximating the frequency of several subgraph families, like cycles or paths, and then generalizing these results to all size-4 subgraphs BIB003 . Another example of an initially purely theoretical technique is the work by Kowaluk et al. BIB006 , which was one of the inspirations for the multitude of matrix based analytic algorithms for counting subgraphs. In fact, the most efficient algorithms are based on several theoretical foundations that allow a tighter analysis of their runtime. Due to this interplay, it is worth mentioning a few more recent papers on subgraph counting and enumeration. There is an interest in finding efficient algorithms that are parameterized by, or sensitive to, certain properties of the graph, such as independent sets BIB004 or its maximum degree . Another current interest is counting and enumerating subgraphs in a dynamic or online environment BIB005 .
Finally, another active theoretical topic is finding optimal algorithms for enumeration, as in BIB007 , as well as proving lower bounds on their time complexity, as Björklund et al. do for triangle listing.
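Many of these theoretical results revolve around such basic primitives. For instance, triangle listing, the problem for which Björklund et al. prove lower bounds, is classically solved by orienting every edge toward its higher-degree endpoint: each out-degree is then at most √(2m), which bounds the total work by O(m^1.5). A minimal sketch of this standard technique:

```python
def list_triangles(adj):
    """List each triangle exactly once using a degree ordering: orient every
    edge toward the endpoint with higher degree (ties broken by vertex id),
    so each triangle is reported from its 'smallest' vertex. Out-degrees are
    at most sqrt(2m), giving O(m^1.5) total work."""
    rank = {v: (len(adj[v]), v) for v in adj}  # degree order with tie-break
    out = {v: {u for u in adj[v] if rank[u] > rank[v]} for v in adj}
    triangles = []
    for v in adj:
        for u in out[v]:
            for w in out[v] & out[u]:  # w comes after both v and u
                triangles.append((v, u, w))
    return triangles

# Triangle 0-1-2 with a pendant vertex 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
tris = list_triangles(adj)
```

The same orientation trick underlies several of the practical counters discussed above, which use triangles as the primitive from which larger counts are derived.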
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Randomised Enumeration <s> Motifs in a network are small connected subnetworks that occur in significantly higher frequencies than in random networks. They have recently gathered much attention as a useful concept to uncover structural design principles of complex networks. Kashtan et al. [Bioinformatics, 2004] proposed a sampling algorithm for efficiently performing the computationally challenging task of detecting network motifs. However, among other drawbacks, this algorithm suffers from sampling bias and is only efficient when the motifs are small (3 or 4 nodes). Based on a detailed analysis of the previous algorithm, we present a new algorithm for network motif detection which overcomes these drawbacks. Experiments on a testbed of biological networks show our algorithm to be orders of magnitude faster than previous approaches. This allows for the detection of larger motifs in bigger networks than was previously possible, facilitating deeper insight into the field. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Randomised Enumeration <s> Determining the frequency of small subgraphs is an important graph mining primitive. One major class of algorithms for this task is based upon the enumeration of all sets of \(k\) connected nodes. These are known as network-centric algorithms. FAst Subgraph Enumeration (FaSE) is an exact algorithm for subgraph counting that contrasts with past approaches by performing the isomorphism tests while doing the enumeration, encapsulating the topological information in a g-trie and thus largely reducing the number of required isomorphism tests. Our goal with this paper is to expand this approach by providing an approximate algorithm, which we called Rand-FaSE.
It uses an unbiased sampling estimator for the number of subgraphs of each type, allowing an user to trade some accuracy for even faster execution times. We tested our algorithm on a set of representative complex networks, comparing it with the exact alternative, FaSE. We also do an extensive analysis by studying its accuracy and speed gains against previous sampling approaches. With all of this, we believe FaSE and Rand-FaSE pave the way for faster network-centric census algorithms. <s> BIB002
These algorithms are adaptations of older enumeration algorithms that perform exact counting. They have the particularity that they all induce a tree-like search space in the computation, where the leaves are the subgraph occurrences, and thus they can all perform the approximation in a similar manner. Each level i of the search tree is assigned a value p_i, which denotes the probability of transitioning from a parent node to a child node at that level. In this scheme, each leaf in the tree is reachable with probability P = p_1 · p_2 · · · p_k, the product of the probabilities along its root-to-leaf path, and the frequency of each subgraph is estimated as the number of samples obtained of that subgraph divided by P. Figure 5 illustrates how probabilities are added to the search tree. In this specific example, which could be equivalent to searching for subgraphs of size 4, the first two levels of the tree have probability 100%, so all their successors are explored. On the other hand, in the last two levels, the probability of exploring a node of the tree is only 80%, and therefore some nodes, marked in grey, are not visited. The first algorithm to implement this strategy was RAND-ESU by Wernicke BIB001 , an approximate version of ESU (described in Section 3.1.1). Recall that ESU maintains two sets, V_S and V_E: the set of vertices in the subgraph and the set of candidate vertices for extending it. When adding a vertex from V_E to V_S, the vertex is added with probability p_{|V_S|}, where |V_S| is the depth of the search tree. Using the more efficient g-trie data structure, Ribeiro and Silva proposed RAND-GTrie and Paredes and Ribeiro BIB002 proposed RAND-FaSE. Each level of the g-trie is assigned a probability p_i; when adding a new vertex to a subgraph of size d, corresponding to depth d in the g-trie, this is done with probability p_d.
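The sampling scheme can be sketched in a few lines (a simplified illustration of the RAND-ESU idea, not Wernicke's exact pseudocode; classifying size-3 subgraphs by their internal edge count is a shortcut that only works at this size):

```python
import random
from itertools import combinations

def rand_esu(adj, k, probs, rng=random):
    """Enumerate connected k-vertex subgraphs ESU-style, but descend from
    depth d only with probability probs[d], and scale each sampled
    occurrence by 1 / (p_1 * ... * p_k) so the estimator stays unbiased."""
    scale = 1.0
    for p in probs:
        scale *= p
    counts = {}

    def extend(v_sub, v_ext, root):
        if len(v_sub) == k:
            m = sum(1 for a, b in combinations(v_sub, 2) if b in adj[a])
            counts[m] = counts.get(m, 0.0) + 1.0 / scale
            return
        v_ext = set(v_ext)
        while v_ext:
            w = v_ext.pop()
            if rng.random() >= probs[len(v_sub)]:
                continue  # the sampling step: prune this branch
            # exclusive neighbours of w: not in, nor adjacent to, the subgraph
            neigh_sub = set(v_sub).union(*(adj[u] for u in v_sub))
            new_ext = v_ext | {u for u in adj[w] if u > root and u not in neigh_sub}
            extend(v_sub | {w}, new_ext, root)

    for v in adj:
        if rng.random() < probs[0]:
            extend({v}, {u for u in adj[v] if u > v}, v)
    return counts

# With all probabilities at 1 the estimate is exact; lowering the deeper
# probabilities trades accuracy for speed, as in Figure 5.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
est = rand_esu(adj, 3, [1.0, 1.0, 1.0])
# est maps internal edge count to estimated frequency: 1 triangle, 2 paths
```

Setting the first levels to probability 1 and only sampling at the deeper, more numerous levels (as in Figure 5) keeps the variance low while still skipping most of the search tree.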
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Enumerate-Generalize <s> Discovering network motifs could provide a significant insight into systems biology. Interestingly, many biological networks have been found to have a high degree of symmetry (automorphism), which is inherent in biological network topologies. The symmetry due to the large number of basic symmetric subgraphs (BSSs) causes a certain redundant calculation in discovering network motifs. Therefore, we compress all basic symmetric subgraphs before extracting compressed subgraphs and propose an efficient decompression algorithm to decompress all compressed subgraphs without loss of any information. In contrast to previous approaches, the novel Symmetry Compression method for Motif Detection, named as SCMD, eliminates most redundant calculations caused by widespread symmetry of biological networks. We use SCMD to improve three notable exact algorithms and two efficient sampling algorithms. Results of all exact algorithms with SCMD are the same as those of the original algorithms, since SCMD is a lossless method. The sampling results show that the use of SCMD almost does not affect the quality of sampling results. For highly symmetric networks, we find that SCMD used in both exact and sampling algorithms can help get a remarkable speedup. Furthermore, SCMD enables us to find larger motifs in biological networks with notable symmetry than previously possible. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Enumerate-Generalize <s> Majority of the existing works on network analysis study properties that are related to the global topology of a network. Examples of such properties include diameter, power-law exponent, and spectra of graph Laplacian. 
Such works enhance our understanding of real-life networks, or enable us to generate synthetic graphs with real-life graph properties. However, many of the existing problems on networks require the study of local topological structures of a network, which did not get the deserved attention in the existing works. In this work, we use graphlet frequency distribution (GFD) as an analysis tool for understanding the variance of local topological structure in a network; we also show that it can help in comparing, and characterizing real-life networks. The main bottleneck to obtain GFD is the excessive computation cost for obtaining the frequency of each of the graphlets in a large network. To overcome this, we propose a simple, yet powerful algorithm, called Graft , that obtains the approximate graphlet frequency for all graphlets that have up-to five vertices. Comparing to an exact counting algorithm, our algorithm achieves a speedup factor between 10 and 100 for a negligible counting error, which is, on average, less than 5 percent. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Enumerate-Generalize <s> We study the problem of approximating the 3-profile of a large graph. 3-profiles are generalizations of triangle counts that specify the number of times a small graph appears as an induced subgraph of a large graph. Our algorithm uses the novel concept of 3-profile sparsifiers: sparse graphs that can be used to approximate the full 3-profile counts for a given large graph. Further, we study the problem of estimating local and ego 3-profiles, two graph quantities that characterize the local neighborhood of each vertex of a graph. Our algorithm is distributed and operates as a vertex program over the GraphLab PowerGraph framework. We introduce the concept of edge pivoting which allows us to collect 2-hop information without maintaining an explicit 2-hop neighborhood list at each vertex. 
This enables the computation of all the local 3-profiles in parallel with minimal communication. We test our implementation in several experiments scaling up to 640 cores on Amazon EC2. We find that our algorithm can estimate the 3-profile of a graph in approximately the same time as triangle counting. For the harder problem of ego 3-profiles, we introduce an algorithm that can estimate profiles of hundreds of thousands of vertices in parallel, in the timescale of minutes. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Enumerate-Generalize <s> We present a novel distributed algorithm for counting all four-node induced subgraphs in a big graph. These counts, called the $4$-profile, describe a graph's connectivity properties and have found several uses ranging from bioinformatics to spam detection. We also study the more complicated problem of estimating the local $4$-profiles centered at each vertex of the graph. The local $4$-profile embeds every vertex in an $11$-dimensional space that characterizes the local geometry of its neighborhood: vertices that connect different clusters will have different local $4$-profiles compared to those that are only part of one dense cluster. ::: Our algorithm is a local, distributed message-passing scheme on the graph and computes all the local $4$-profiles in parallel. We rely on two novel theoretical contributions: we show that local $4$-profiles can be calculated using compressed two-hop information and also establish novel concentration results that show that graphs can be substantially sparsified and still retain good approximation quality for the global $4$-profile. ::: We empirically evaluate our algorithm using a distributed GraphLab implementation that we scaled up to $640$ cores. 
We show that our algorithm can compute global and local $4$-profiles of graphs with millions of edges in a few minutes, significantly improving upon the previous state of the art. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Enumerate-Generalize <s> Recently exploring locally connected subgraphs (also known as motifs or graphlets) of complex networks attracts a lot of attention. Previous work made the strong assumption that the graph topology of interest is known in advance. In practice, sometimes researchers have to deal with the situation where the graph topology is unknown because it is expensive to collect and store all topological information. Hence, typically what is available to researchers is only a snapshot of the graph, i.e., a subgraph of the graph. Crawling methods such as breadth first sampling can be used to generate the snapshot. However, these methods fail to sample a streaming graph represented as a high speed stream of edges. Therefore, graph mining applications such as network traffic monitoring usually use random edge sampling (i.e., sample each edge with a fixed probability) to collect edges and generate a sampled graph, which we call a “ RESampled graph ”. Clearly, a RESampled graph's motif statistics may be quite different from those of the original graph. To resolve this, we propose a framework Minfer, which takes the given RESampled graph and accurately infers the underlying graph's motif statistics. Experiments using large scale datasets show the accuracy and efficiency of our method. <s> BIB005
The general idea of these algorithms is to perform an exact count on a smaller network that was obtained from the original one (e.g., a sample, or a compressed network). From the frequencies of each subgraph in the smaller network, the frequencies in the original network are estimated. Algorithms vary on (i) how the smaller network is obtained and on (ii) which estimator they use. The first example of an algorithm in this category is Targeted Node Processing (TNP) by Pržulj et al. . This algorithm is specially tailored for protein-protein interaction networks, which, according to the authors, have a periphery that is sparser than the more central parts of the network. Using this information, it performs an exact count of the subgraphs in the periphery of the network and uses their frequencies to estimate the frequencies in the rest of the network. The authors claim that, due to the uniformity of the aforementioned networks, the distribution of the subgraphs in the fringe is representative of the distribution in the rest of the network. SCMD by Wang et al. BIB001 (already covered in Section 3.1.3) allows the use of any approximate counting method in the compressed graph. There is no guarantee that subgraphs are counted uniformly in the compressed graph, introducing a bias that needs to be corrected. The authors give the example of this bias when using their method in conjunction with RAND-ESU. If each leaf (subgraph) of depth k in the search tree is reached with probability P and a specific subgraph in the compressed graph is sampled with probability ρ, then, to correct the sampling bias, the probability of decompressing the relevant k-subgraph is P/ρ. In GRAFT, Rahman et al. BIB002 provide a strategy for counting undirected graphlets of size up to 5, using edge sampling. The algorithm starts by picking an edge e g from each of the 29 graphlets and a set S of edges sampled from the graph without replacement.
For each edge e ∈ S and for each graphlet g, the frequency of g is calculated such that e has the same position in g as e g (e is said to be aligned with e g ). These frequencies are summed over all edges and divided by a normalising factor, based on the automorphisms of each graphlet, which becomes the estimate of the frequency of that graphlet in the whole network. Note that if S is equal to E(G), the algorithm outputs an exact answer. Elenberg et al. create estimators for the frequency of size 3 BIB003 and 4 BIB004 subgraphs. A major difference from this work to previous ones is that Elenberg et al. also estimate the frequencies of subgraphs that are not connected, besides the usual connected ones. The authors start by removing each edge from the network with a certain probability and computing the exact counts in this "sub-sampled" network. Then, they craft a set of linear equations that relate the exact counts on this smaller network to the ones of the original network. Using these equations, the estimation of the frequency of the subgraphs in the original network follows. Wang et al. BIB005 introduce an algorithm that aims to estimate the subgraph concentrations of a network when only a fraction of its edges is known. They call this a "RESampled Graph", obtained from the real network through random edge sampling, a common scenario in applications such as network traffic analysis. A key aspect of this algorithm is the number of non-induced subgraphs of a size k graphlet that are isomorphic to another size k graphlet; an example of this calculation can be found in Table 5 . Using this number and the proportion of edges sampled to form the smaller network, the authors compute the probability that a subgraph in the "RESampled Graph" is isomorphic to another subgraph in the original graph.
Then, an exact counting algorithm is applied to the "RESampled Graph" and by composing the results from this algorithm with the aforementioned probability, the subgraph concentrations in the original network are estimated.
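The sparsify-and-rescale logic shared by these estimators can be made concrete with a minimal sketch (a generic illustration, not the exact estimator of any algorithm above): if each edge is kept independently with probability p, a triangle survives only when all three of its edges survive, which happens with probability p^3, so the exact count on the sampled graph is rescaled by 1/p^3.

```python
import random

def count_triangles(edges):
    """Exact triangle count via adjacency-set intersection."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = 0
    for u, v in edges:
        total += len(adj[u] & adj[v])  # common neighbours close a triangle
    return total // 3                  # each triangle is met at 3 of its edges

def estimate_triangles(edges, p, seed=0):
    """Keep each edge independently with probability p, count exactly on
    the sparsified graph, and rescale: a triangle survives with
    probability p**3, so divide the sampled count by p**3."""
    rng = random.Random(seed)
    sampled = [e for e in edges if rng.random() < p]
    return count_triangles(sampled) / p ** 3
```

The same pattern generalises to any size-k subgraph with m edges, whose survival probability under edge sampling is p^m; this is the relationship the linear-equation and RESampled-graph estimators exploit.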
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Path Sampling <s> Counting the frequency of small subgraphs is a fundamental technique in network analysis across various domains, most notably in bioinformatics and social networks. The special case of triangle counting has received much attention. Getting results for 4-vertex patterns is highly challenging, and there are few practical results known that can scale to massive sizes. Indeed, even a highly tuned enumeration code takes more than a day on a graph with millions of edges. Most previous work that runs for truly massive graphs employ clusters and massive parallelization. We provide a sampling algorithm that provably and accurately approximates the frequencies of all 4-vertex pattern subgraphs. Our algorithm is based on a novel technique of 3-path sampling and a special pruning scheme to decrease the variance in estimates. We provide theoretical proofs for the accuracy of our algorithm, and give formal bounds for the error and confidence of our estimates. We perform a detailed empirical study and show that our algorithm provides estimates within 1% relative error for all subpatterns (over a large class of test graphs), while being orders of magnitude faster than enumeration and other sampling based algorithms. Our algorithm takes less than a minute (on a single commodity machine) to process an Orkut social network with 300 million edges. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Path Sampling <s> Counting 3-, 4-, and 5-node graphlets in graphs is important for graph mining applications such as discovering abnormal/evolution patterns in social and biology networks. In addition, it is recently widely used for computing similarities between graphs and graph classification applications such as protein function prediction and malware detection. 
However, it is challenging to compute these graphlet counts for a large graph or a large set of graphs due to the combinatorial nature of the problem. Despite recent efforts in counting 3-node and 4-node graphlets, little attention has been paid to characterizing 5-node graphlets. In this paper, we develop a computationally efficient sampling method to estimate 5-node graphlet counts. We not only provide a fast sampling method and unbiased estimators of graphlet counts, but also derive simple yet exact formulas for the variances of the estimators which are of great value in practice—the variances can be used to bound the estimates’ errors and determine the smallest necessary sampling budget for a desired accuracy. We conduct experiments on a variety of real-world datasets, and the results show that our method is several orders of magnitude faster than the state-of-the-art methods with the same accuracy. <s> BIB002
This family of algorithms relies on the idea of sampling path subgraphs to estimate the frequencies of the other subgraphs. Path subgraphs are composed of 2 exterior nodes and k − 2 interior nodes (where k is the size of the subgraph) arranged in a single line; the interior nodes all have degree 2, while the exterior nodes have degree 1. Examples of these are the subgraphs G 1 , G 3 and G 9 in Figure 6 . The main idea behind these algorithms, mainly for k ≥ 4, is to relate the number of non-induced occurrences of each subgraph of size k in the other size k subgraphs. For example, when k = 4, there are 4 non-induced occurrences of G 3 in G 5 and 12 non-induced occurrences of G 3 in G 8 . Seshadhri et al. introduced the idea of wedge sampling, where wedges denote size 3 path subgraphs. The premise of the algorithm is simple: they select a number of wedges uniformly at random and check whether they are closed or not. The fraction of closed wedges sampled is an estimate of the clustering coefficient, from which the number of triangles can be derived. Building on the idea of wedge sampling, Jha et al. BIB001 propose path sampling to estimate the frequency of size 4 graphlets. The main primitive of the algorithm is sampling non-induced occurrences of G 3 and determining which graphlet is induced by that sample. The estimator relies on both the number of induced subgraphs counted via the sampling and information contained in Table 5 . Finally, the authors determine an equation to count the number of stars with 4 nodes (G 4 ) based on the frequencies of the other graphlets, since G 4 does not have any non-induced occurrence of G 3 . Applying the same concepts to size 5 subgraphs, Wang et al. BIB002 present MOSS-5. For size 5, sampling paths is not enough to estimate the frequencies of all different subgraphs, as there are 3 subgraphs that do not have a non-induced occurrence of a path: G 10 , G 11 and G 14 .
On the other hand, G 11 does not have a non-induced occurrence in 3 subgraphs either (G 9 , G 10 and G 15 ). Using this knowledge, the authors create an algorithm divided into two parts: first they sample non-induced size 5 paths (G 9 ), similarly to Jha et al. BIB001 , and then they repeat the procedure but sample occurrences of G 11 instead. Combining the results from these two sampling schemes, the authors are able to estimate the frequency of every size 5 subgraph. To the best of our knowledge, MOSS-5 is the algorithm that achieves the best trade-off between accuracy and time when estimating the frequency of 5-subgraphs, as it is able to reach very small errors (magnitude 10 −2 ) with a very limited number of samples, even for big networks. However, the ideas behind MOSS-5 are not easily extendable to directed subgraphs or to larger undirected subgraphs, due to the ever increasing number of dependencies between the numbers of non-induced occurrences, which makes it harder to use the information contained in a table similar to Table 5 for these cases.
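The wedge sampling primitive at the root of this family fits in a few lines. The sketch below assumes an adjacency-set representation; wedge selection is made uniform over all wedges by weighting each centre vertex by its d(d − 1)/2 wedges, and the closed fraction is rescaled to a triangle count (each triangle closes exactly 3 wedges):

```python
import random

def estimate_triangles_wedges(adj, samples=10000, seed=0):
    """Wedge sampling: draw wedges uniformly at random, measure the
    fraction that are closed, and rescale to a triangle estimate."""
    rng = random.Random(seed)
    nodes = [v for v in adj if len(adj[v]) >= 2]
    # number of wedges centred at each vertex: d * (d - 1) / 2
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in nodes]
    total_wedges = sum(weights)
    closed = 0
    for _ in range(samples):
        v = rng.choices(nodes, weights=weights)[0]   # centre of the wedge
        a, b = rng.sample(sorted(adj[v]), 2)         # two distinct neighbours
        if b in adj[a]:                              # wedge is closed
            closed += 1
    return (closed / samples) * total_wedges / 3
```

The closed fraction itself is the (global) clustering coefficient estimate; the path sampling algorithms for k = 4 and k = 5 follow the same select-then-classify pattern with longer paths in place of wedges.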
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Random Walk <s> Summary: Biological and engineered networks have recently been shown to display network motifs: a small set of characteristic patterns that occur much more frequently than in randomized networks with the same degree sequence. Network motifs were demonstrated to play key information processing roles in biological regulation networks. Existing algorithms for detecting network motifs act by exhaustively enumerating all subgraphs with a given number of nodes in the network. The runtime of such algorithms increases strongly with network size. Here, we present a novel algorithm that allows estimation of subgraph concentrations and detection of network motifs at a runtime that is asymptotically independent of the network size. This algorithm is based on random sampling of subgraphs. Network motifs are detected with a surprisingly small number of samples in a wide variety of networks. Our method can be applied to estimate the concentrations of larger subgraphs in larger networks than was previously possible with exhaustive enumeration algorithms. We present results for high-order motifs in several biological networks and discuss their possible functions. ::: ::: Availability: A software tool for estimating subgraph concentrations and detecting network motifs (mfinder 1.1) and further information is available at http://www.weizmann.ac.il/mcb/UriAlon/ <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Random Walk <s> Graphlet frequency distribution (GFD) has recently become popular for characterizing large networks. However, the computation of GFD for a network requires the exact count of embedded graphlets in that network, which is a computationally expensive task. As a result, it is practically infeasible to compute the GFD for even a moderately large network. 
In this paper, we propose GUISE, which uses a Markov Chain Monte Carlo (MCMC) sampling method for constructing the approximate GFD of a large network. Our experiments on networks with millions of nodes show that GUISE obtains the GFD within few minutes, whereas the exhaustive counting based approach takes several days. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Random Walk <s> Exploring statistics of locally connected subgraph patterns (also known as network motifs) has helped researchers better understand the structure and function of biological and Online Social Networks (OSNs). Nowadays, the massive size of some critical networks—often stored in already overloaded relational databases—effectively limits the rate at which nodes and edges can be explored, making it a challenge to accurately discover subgraph statistics. In this work, we propose sampling methods to accurately estimate subgraph statistics from as few queried nodes as possible. We present sampling algorithms that efficiently and accurately estimate subgraph properties of massive networks. Our algorithms require no precomputation or complete network topology information. At the same time, we provide theoretical guarantees of convergence. We perform experiments using widely known datasets and show that, for the same accuracy, our algorithms require an order of magnitude less queries (samples) than the current state-of-the-art algorithms. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Random Walk <s> Scientists have shown that network motifs are key building block of various biological networks. Most of the existing exact methods for finding network motifs are inefficient simply due to the inherent complexity of this task. 
In recent years, researchers are considering approximate methods that save computation by sacrificing exact counting of the frequency of potential motifs. However, these methods are also slow when one considers the motifs of larger size. In this work, we propose two methods for approximate motif finding, namely SRW-rw, and MHRW based on Markov Chain Monte Carlo (MCMC) sampling. Both the methods are significantly faster than the best of the existing methods, with comparable or better accuracy. Further, as the motif size grows the complexity of the proposed methods grows linearly. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Random Walk <s> Algorithms for mining very large graphs, such as those representing online social networks, to discover the relative frequency of small subgraphs within them are of high interest to sociologists, computer scientists and marketeers alike. However, the computation of these network motif statistics via naive enumeration is infeasible for either its prohibitive computational costs or access restrictions on the full graph data. Methods to estimate the motif statistics based on random walks by sampling only a small fraction of the subgraphs in the large graph address both of these challenges. In this paper, we present a new algorithm, called the Waddling Random Walk (WRW), which estimates the concentration of motifs of any size. It derives its name from the fact that it sways a little to the left and to the right, thus also sampling nodes not directly on the path of the random walk. The WRW algorithm achieves its computational efficiency by not trying to enumerate subgraphs around the random walk but instead using a randomized protocol to sample subgraphs in the neighborhood of the nodes visited by the walk. 
In addition, WRW achieves significantly higher accuracy (measured by the closeness of its estimate to the correct value) and higher precision (measured by the low variance in its estimations) than the current state-of-the-art algorithms for mining subgraph statistics. We illustrate these advantages in speed, accuracy and precision using simulations on well-known and widely used graph datasets representing real networks. <s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Random Walk <s> Graphlets are induced subgraph patterns and have been frequently applied to characterize the local topology structures of graphs across various domains, e.g., online social networks (OSNs) and biological networks. Discovering and computing graphlet statistics are highly challenging. First, the massive size of real-world graphs makes the exact computation of graphlets extremely expensive. Secondly, the graph topology may not be readily available so one has to resort to web crawling using the available application programming interfaces (APIs). In this work, we propose a general and novel framework to estimate graphlet statistics of "any size". Our framework is based on collecting samples through consecutive steps of random walks. We derive an analytical bound on the sample size (via the Chernoff-Hoeffding technique) to guarantee the convergence of our unbiased estimator. To further improve the accuracy, we introduce two novel optimization techniques to reduce the lower bound on the sample size. Experimental evaluations demonstrate that our methods outperform the state-of-the-art method up to an order of magnitude both in terms of accuracy and time cost. 
<s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Random Walk <s> Mining graphlet statistics is very meaningful due to its wide applications in social networks, bioinformatics and information security, etc. However, it is a big challenge to exactly count graphlet statistics as the number of subgraphs exponentially increases with the graph size, so sampling algorithms are widely used to estimate graphlet statistics within reasonable time. However, existing sampling algorithms are not scalable for large graphlets, e.g., they may get stuck when estimating graphlets with more than five nodes. To address this issue, we propose a highly scalable algorithm, Scalable subgraph Sampling via Random Walk (SSRW), for graphlet counts and concentrations. SSRW samples graphlets by generating new nodes from the neighbors of previously visited nodes instead of fixed ones. Thanks to this flexibility, we can generate any k-graphlets in a unified way and estimate statistics of k-graphlet efficiently even for large k. Our extensive experiments on estimating counts and concentrations of \(\{4,5,6,7\}\)-graphlets show that SSRW algorithm is scalable, accurate and fast. <s> BIB007
A random walk in a graph G is a sequence of nodes, R, of the form R = (n 1 , n 2 , . . .), where n 1 is the seed node and n i is the ith node visited in the walk. A random walk can also be seen as a Markov chain. We identify two main approaches to sampling subgraphs using random walks. The first is to increment the size of the walk until a sequence of k distinct nodes is drawn, forming a k-subgraph, which is then identified by an isomorphism test. The second is to consider a graph of relationships between subgraphs, where two subgraphs are connected if one can be obtained from the other by adding or removing a node or an edge. A random walk is then performed on this graph instead of on the original one. Kashtan et al. BIB001 , in their seminal work commonly called ESA (Edge Sampling), implemented one of the first subgraph sampling methods in the MFinder software. The authors propose to do a random walk on the graph, sampling one edge at a time until a set of k nodes is found, from which the subgraph induced by that set of nodes is discovered. This method resulted in a biased estimator. To correct the bias, the authors propose to re-weight the sample, which takes exponential time in the size of the subgraphs. Bhuiyan et al. BIB002 develop GUISE, which computes the graphlet degree distribution for subgraphs of size 3, 4 and 5 in undirected networks. The algorithm is based on Markov Chain Monte Carlo (MCMC) sampling. It works by sampling a seed graphlet, calculating its neighbourhood (a set of other graphlets), picking one randomly and calculating an acceptance probability to transition to this new graphlet. This process is then repeated until a predefined number of samples is taken from the graph.
The neighbourhood of a graphlet is similar to the graph of relationships previously mentioned, but to obtain a k-graphlet from another k-graphlet, a node from the original one is removed and, if the remaining k − 1 nodes are connected, their adjacency lists are concatenated and nodes are picked from there to form the new k-graphlet. A similar approach to GUISE is used by Saha and Al Hasan BIB004 , where MCMC sampling is also used to compute subgraph concentrations. A difference from GUISE is that the size of graphlets is theoretically unbounded and only a specific size k is counted, whereas GUISE counts graphlets of size 3, 4 and 5 simultaneously. They also suggest a modified version where the acceptance probability is always one (that is, there is always a transition to the new subgraph), which introduces a bias towards graphlets with high-degree nodes. In turn, they propose an estimator that re-weights the concentration to remove this bias. Wang et al. BIB003 propose a random walk based method to estimate subgraph concentrations that aims to improve on the approach taken by GUISE. The main improvement over GUISE is that no samples are rejected, avoiding the cost of sampling without any gain of information. The authors use a graph of relationships between connected induced subgraphs, where two k-subgraphs are connected if they share k − 1 nodes, but this graph is not explicitly built, reducing memory costs. The basic algorithm is just a simple random walk over this graph of relationships. The authors also present two improvements: Pairwise Subgraph Random Walk (PSRW), which estimates size k subgraphs by looking at the graph of relationships composed of (k − 1)-subgraphs; and Mixed Subgraph Sampling (MSS), which estimates subgraphs of size k − 1, k and k + 1 simultaneously. Han and Sethu BIB005 present an algorithm to estimate subgraph concentrations based on random walks.
Their algorithm, Waddling Random Walk (WRW), gets its name from how the random walk is performed, allowing it to sample nodes not only on the path of the walk but also to query random nodes in the neighbourhood. Let l be the number of vertices (with repetition) in the shortest path of a particular k-graphlet. The goal of the waddling is to reduce the number of steps the walk has to take to identify graphlets with l > k. While executing a random walk to identify a k-subgraph, the waddling approach limits the number of nodes explored to the size of the subgraph, k. Chen and Lui propose a random walk based algorithm to estimate graphlet counts in online social networks, which are often access-restricted, with the entire topology hidden behind a prohibitive query cost. With this context in mind, the authors introduce the concepts of touched and visible subgraphs. The former are subgraphs composed of vertices whose neighbourhood is accessible. The latter possess one and only one vertex with an inaccessible neighbourhood. Their method, IMPR, works by generating (k − 1)-node touched subgraphs via random walk and combining them with their nodes' neighbourhoods to obtain k-node visible subgraphs, which form the k-node samples. Chen et al. BIB006 introduce a new framework that incorporates PSRW as a special case. To sample k-subgraphs, the authors also use a graph of relationships between connected induced d-subgraphs, d ∈ {1, .., k − 1}, and perform a random walk over this graph. The difference to PSRW is that PSRW only uses d = k − 1, which becomes ineffective as k grows to larger sizes. The authors also augment this method of sampling with a different re-weighting coefficient to improve estimation accuracy and add non-backtracking random walks, which eliminate invalid states in the Markov Chain that do not contribute to the estimation. Yang et al.
BIB007 introduce another algorithm using random walks, Scalable subgraph Sampling via Random Walk (SSRW), able to compute both frequencies and concentrations of undirected subgraphs of size up to 7. The next nodes in the random walk are picked from the concatenation of the neighbourhoods of all nodes previously selected to be a part of the sampled subgraph. The authors present an unbiased estimator and compare it against Chen et al. BIB006 and Han and Sethu BIB005 , getting better results than both for the single network tested.
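The subgraph-growing step behind these walk-based samplers, and the kind of concentration estimate it feeds, can be sketched as follows. This is an illustrative simplification: the next node is drawn from the concatenated neighbourhoods of the nodes chosen so far (the growth step SSRW-style methods use), and the unbiasing re-weights discussed above are omitted, so the resulting concentrations are the raw, biased ones.

```python
import random

def sample_connected_subgraph(adj, k, seed=0):
    """Grow a connected k-node set by repeatedly picking the next node
    from the concatenated neighbourhoods of already-chosen nodes.
    High-degree regions are over-sampled; real estimators correct this."""
    rng = random.Random(seed)
    nodes = [rng.choice(list(adj))]
    while len(nodes) < k:
        # multiset union of neighbourhoods, excluding chosen nodes
        frontier = [u for v in nodes for u in adj[v] if u not in nodes]
        if not frontier:                 # dead end: restart from a new seed
            nodes = [rng.choice(list(adj))]
            continue
        nodes.append(rng.choice(frontier))
    return frozenset(nodes)

def triad_concentration(adj, samples=5000, seed=0):
    """Raw (un-reweighted) concentrations of the two connected
    3-subgraphs: paths (2 edges) vs triangles (3 edges)."""
    counts = {"path": 0, "triangle": 0}
    for i in range(samples):
        s = sample_connected_subgraph(adj, 3, seed=seed + i)
        edges = sum(1 for u in s for v in s if u < v and v in adj[u])
        counts["triangle" if edges == 3 else "path"] += 1
    return {g: c / samples for g, c in counts.items()}
```

In a real estimator each sample would be divided by its sampling probability (or accepted with a Metropolis-Hastings probability, as in GUISE) before aggregation.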
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Colour Coding <s> Identifying motifs (or commonly occurring subgraphs/templates) has been found to be useful in a number of applications, such as biological and social networks; they have been used to identify building blocks and functional properties, as well as to characterize the underlying networks. Enumerating subgraphs is a challenging computational problem, and all prior results have considered networks with a few thousand nodes. In this paper, we develop a parallel subgraph enumeration algorithm, ParSE, that scales to networks with millions of nodes. Our algorithm is a randomized approximation scheme, that estimates the subgraph frequency to any desired level of accuracy, and allows enumeration of a class of motifs that extends those considered in prior work. Our approach is based on parallelization of an approach called color coding, combined with a stream based partitioning. We also show that ParSE scales well with the number of processors, over a large range. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Colour Coding <s> Relational sub graph analysis, e.g. finding labeled sub graphs in a network, which are isomorphic to a template, is a key problem in many graph related applications. It is computationally challenging for large networks and complex templates. In this paper, we develop SAHAD, an algorithm for relational sub graph analysis using Hadoop, in which the sub graph is in the form of a tree. SAHAD is able to solve a variety of problems closely related with sub graph isomorphism, including counting labeled/unlabeled sub graphs, finding supervised motifs, and computing graph let frequency distribution. We prove that the worst case work complexity for SAHAD is asymptotically very close to that of the best sequential algorithm. 
On a mid-size cluster with about 40 compute nodes, SAHAD scales to networks with up to 9 million nodes and a quarter billion edges, and templates with up to 12 nodes. To the best of our knowledge, SAHAD is the first such Hadoop based subgraph/subtree analysis algorithm, and performs significantly better than prior approaches for very large graphs and templates. Another unique aspect is that SAHAD is also amenable to running quite easily on Amazon EC2, without needs for any system level optimization. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Colour Coding <s> We present a new shared-memory parallel algorithm and implementation called FASCIA for the problems of approximate sub graph counting and sub graph enumeration. The problem of sub graph counting refers to determining the frequency of occurrence of a given sub graph (or template) within a large network. This is a key graph analytic with applications in various domains. In bioinformatics, sub graph counting is used to detect and characterize local structure (motifs) in protein interaction networks. Exhaustive enumeration and exact counting is extremely compute-intensive, with running time growing exponentially with the number of vertices in the template. In this work, we apply the color coding technique to determine approximate counts of non-induced occurrences of the sub graph in the original network. Color coding gives a fixed-parameter algorithm for this problem, using a dynamic programming-based counting approach. Our new contributions are a multilevel shared-memory parallelization of the counting scheme and several optimizations to reduce the memory footprint. We show that approximate counts can be obtained for templates with up to 12 vertices, on networks with up to millions of vertices and edges. Prior work on this problem has only considered out-of-core parallelization on distributed platforms. 
With our new counting scheme, data layout optimizations, and multicore parallelism, we demonstrate a significant speedup over the current state-of-the-art for sub graph counting. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Colour Coding <s> Counting graphlets is a well-studied problem in graph mining and social network analysis. Recently, several papers explored very simple and natural algorithms based on Monte Carlo sampling of Markov Chains (MC), and reported encouraging results. We show, perhaps surprisingly, that such algorithms are outperformed by color coding (CC) [2], a sophisticated algorithmic technique that we extend to the case of graphlet sampling and for which we prove strong statistical guarantees. Our computational experiments on graphs with millions of nodes show CC to be more accurate than MC; furthermore, we formally show that the mixing time of the MC approach is too high in general, even when the input graph has high conductance. All this comes at a price however. While MC is very efficient in terms of space, CC’s memory requirements become demanding when the size of the input graph and that of the graphlets grow. And yet, our experiments show that CC can push the limits of the state-of-the-art, both in terms of the size of the input graph and of that of the graphlets. <s> BIB004
The technique of colour coding has been adapted to the problem of approximating subgraph frequencies by Zhao et al. BIB001 , Zhao et al. BIB002 and Slota and Madduri BIB003 . However, all these works focus on specific categories of subgraphs; for example, SAHAD BIB002 only finds subgraphs that take the form of a tree. More recently, Bressan et al. BIB004 presented a general colour coding algorithm that works for any undirected subgraph of theoretically unbounded size. The algorithm works in two phases. The first phase, based on the original description of colour coding, counts the number of non-induced trees (treelets) in the graph, with one particularity: the nodes are first partitioned into k sets, each attributed a label (a colour), and only treelets made up solely of nodes with different colours are counted. This phase outputs counters C(T , S, v), for every v ∈ V (G), giving the number of treelets rooted in v that are isomorphic to T and whose colours span the colour set S. The second phase samples treelets uniformly at random. To pick a treelet with k nodes, the authors choose a random node v, then a treelet shape T with probability proportional to C(T , [k], v), and finally one of the treelets that is rooted in v, isomorphic to T and coloured by [k]. Given a treelet T k , the authors consider the graphlet G k induced by the nodes of T k and increment its frequency by 1/σ (G k ), where σ (G k ) is the number of spanning trees of G k .
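The counting phase of colour coding can be illustrated with a small sketch. The snippet below is our own illustration (not the authors' implementation) of one colouring round of the classic scheme for simple paths, the basic case of treelet counting: vertices receive random colours and a dynamic program over colour sets counts colourful paths, which yields an unbiased estimate of the total path count.

```python
import math
import random
from collections import defaultdict

def colourful_path_estimate(adj, k, seed=0):
    """One colour coding round (sketch): randomly k-colour the vertices,
    then count 'colourful' simple k-vertex paths by dynamic programming
    over colour sets. A fixed path is colourful with probability k!/k^k,
    so dividing by it gives an unbiased estimate of the path count.
    `adj` maps each vertex to its neighbour list (undirected graph)."""
    rng = random.Random(seed)
    colour = {v: rng.randrange(k) for v in adj}
    # C[v][S]: number of colourful paths ending at v whose colours span S
    C = {v: defaultdict(int) for v in adj}
    for v in adj:
        C[v][frozenset([colour[v]])] = 1
    for _ in range(k - 1):
        nxt = {v: defaultdict(int) for v in adj}
        for v in adj:
            for S, cnt in C[v].items():
                for u in adj[v]:
                    if colour[u] not in S:  # only extend with a new colour
                        nxt[u][S | {colour[u]}] += cnt
        C = nxt
    full = frozenset(range(k))
    directed = sum(C[v][full] for v in adj)  # each path found from both ends
    return (directed / 2) / (math.factorial(k) / k ** k)
```

Averaging the estimate over many independent colouring rounds reduces its variance, which is how the approximate counts are obtained in practice.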
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real world tasks are expressible in this model, as shown in the paper. ::: ::: Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system. ::: ::: Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented and upwards of one thousand MapReduce jobs are executed on Google's clusters every day. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Network motifs have been demonstrated to be the building blocks in many biological networks such as transcriptional regulatory networks. Finding network motifs plays a key role in understanding system level functions and design principles of molecular interactions. In this paper, we present a novel definition of the neighborhood of a node. 
Based on this concept, we formally define and present an effective algorithm for finding network motifs. The method seeks a neighborhood assignment for each node such that the induced neighborhoods are partitioned with no overlap. We then present a parallel algorithm to find network motifs using a parallel cluster. The algorithm is applied on an E. coli transcriptional regulatory network to find motifs with size up to six. Compared with previous algorithms, our algorithm performs better in terms of running time and precision. Based on the motifs that are found in the network, we further analyze the topology and coverage of the motifs. The results suggest that a small number of key motifs can form the motifs of a bigger size. Also, some motifs exhibit a correlation with complex functions. This study presents a framework for detecting the most significant recurring subgraph patterns in transcriptional regulatory networks. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Motifs in a network are small connected subnetworks that occur in significantly higher frequencies than in random networks. They have recently gathered much attention as a useful concept to uncover structural design principles of complex networks. Kashtan et al. [Bioinformatics, 2004] proposed a sampling algorithm for efficiently performing the computationally challenging task of detecting network motifs. However, among other drawbacks, this algorithm suffers from sampling bias and is only efficient when the motifs are small (3 or 4 nodes). Based on a detailed analysis of the previous algorithm, we present a new algorithm for network motif detection which overcomes these drawbacks. Experiments on a testbed of biological networks show our algorithm to be orders of magnitude faster than previous approaches. 
This allows for the detection of larger motifs in bigger networks than was previously possible, facilitating deeper insight into the field. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> The study of biological networks and network motifs can yield significant new insights into systems biology. Previous methods of discovering network motifs - network-centric subgraph enumeration and sampling - have been limited to motifs of 6 to 8 nodes, revealing only the smallest network components. New methods are necessary to identify larger network sub-structures and functional motifs. ::: ::: Here we present a novel algorithm for discovering large network motifs that achieves these goals, based on a novel symmetry-breaking technique, which eliminates repeated isomorphism testing, leading to an exponential speed-up over previous methods. This technique is made possible by reversing the traditional network-based search at the heart of the algorithm to a motif-based search, which also eliminates the need to store all motifs of a given size and enables parallelization and scaling. Additionally, our method enables us to study the clustering properties of discovered motifs, revealing even larger network elements. ::: ::: We apply this algorithm to the protein-protein interaction network and transcription regulatory network of S. cerevisiae, and discover several large network motifs, which were previously inaccessible to existing methods, including a 29-node cluster of 15-node motifs corresponding to the key transcription machinery of S. cerevisiae. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> We introduce GPUMiner, a novel parallel data mining system that utilizes new-generation graphics processing units (GPUs). 
Our system relies on the massively multi-threaded SIMD (Single Instruction, Multiple-Data) architecture provided by GPUs. As special-purpose co-processors, these processors are highly optimized for graphics rendering and rely on the CPU for data input/output as well as complex program control. Therefore, we design GPUMiner to consist of the following three components: (1) a CPU-based storage and buffer manager to handle I/O and data transfer between the CPU and the GPU, (2) a GPU-CPU co-processing parallel mining module, and (3) a GPU-based mining visualization module. We design the GPU-CPU co-processing scheme in mining depending on the complexity and inherent parallelism of individual mining algorithms. We provide the visualization module to facilitate users to observe and interact with the mining process online. We have implemented the k-means clustering and the Apriori frequent pattern mining algorithms in GPUMiner. Our preliminary results have shown significant speedups over state-of-the-art CPU implementations on a PC with a G80 GPU and a quad-core CPU. We will demonstrate the mining process through our visualization module. Code and documentation of GPUMiner are available at http://code.google.com/p/gpuminer/. <s> BIB005 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Network motifs are basic building blocks in complex networks. Motif detection has recently attracted much attention as a topic to uncover structural design principles of complex networks. Pattern finding is the most computationally expensive step in the process of motif detection. In this paper, we design a pattern finding algorithm based on Google MapReduce to improve the efficiency. Performance evaluation shows our algorithm can facilitate the detection of larger motifs in large size networks and has good scalability.
We apply it in the prescription network and find some commonly used prescription network motifs that provide the possibility to further discover the law of prescription compatibility. <s> BIB006 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Finding and counting the occurrences of a collection of subgraphs within another larger network is a computationally hard problem, closely related to graph isomorphism. The subgraph count is by itself a very powerful characterization of a network and it is crucial for other important network measurements. G-tries are a specialized data-structure designed to store and search for subgraphs. By taking advantage of subgraph common substructure, g-tries can provide considerable speedups over previously used methods. In this paper we present a parallel algorithm based precisely on g-tries that is able to efficiently find and count subgraphs. The algorithm relies on randomized receiver-initiated dynamic load balancing and is able to stop its computation at any given time, efficiently store its search position, divide what is left to compute in two halfs, and resume from where it left. We apply our algorithm to several representative real complex networks from various domains and examine its scalability. We obtain an almost linear speedup up to 128 processors, thus allowing us to reach previously unfeasible limits. We showcase the multidisciplinary potential of the algorithm by also applying it to network motif discovery. <s> BIB007 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Graphs are a fundamental data representation that has been used extensively in various domains. In graph-based applications, a systematic exploration of the graph such as a breadth-first search (BFS) often serves as a key component in the processing of their massive data sets. 
In this paper, we present a new method for implementing the parallel BFS algorithm on multi-core CPUs which exploits a fundamental property of randomly shaped real-world graph instances. By utilizing memory bandwidth more efficiently, our method shows improved performance over the current state-of-the-art implementation and increases its advantage as the size of the graph increases. We then propose a hybrid method which, for each level of the BFS algorithm, dynamically chooses the best implementation from: a sequential execution, two different methods of multicore execution, and a GPU execution. Such a hybrid approach provides the best performance for each graph size while avoiding poor worst-case performance on high-diameter graphs. Finally, we study the effects of the underlying architecture on BFS performance by comparing multiple CPU and GPU systems, a high-end GPU system performed as well as a quad-socket high-end CPU system. <s> BIB008 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Breadth-first search (BFS) is a core primitive for graph traversal and a basis for many higher-level graph analysis algorithms. It is also representative of a class of parallel computations whose memory accesses and work distribution are both irregular and data-dependent. Recent work has demonstrated the plausibility of GPU sparse graph traversal, but has tended to focus on asymptotically inefficient algorithms that perform poorly on graphs with non-trivial diameter. We present a BFS parallelization focused on fine-grained task management constructed from efficient prefix sum that achieves an asymptotically optimal O(|V|+|E|) work complexity. Our implementation delivers excellent performance on diverse graphs, achieving traversal rates in excess of 3.3 billion and 8.3 billion traversed edges per second using single and quad-GPU configurations, respectively. 
This level of performance is several times faster than state-of-the-art implementations on both CPU and GPU platforms. <s> BIB009 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Counting the occurrences of small subgraphs in large networks is a fundamental graph mining metric with several possible applications. Computing frequencies of those subgraphs is also known as the subgraph census problem, which is a computationally hard task. In this paper we provide a parallel multicore algorithm for this purpose. At its core we use FaSE, an efficient network-centric sequential subgraph census algorithm, which is able to substantially decrease the number of isomorphism tests needed when compared to past approaches. We use one thread per core and employ a dynamic load balancing scheme capable of dealing with the highly unbalanced search tree induced by FaSE and effectively redistributing work during execution. We assessed the scalability of our algorithm on a varied set of representative networks and achieved near linear speedup up to 32 cores while obtaining a high efficiency for the total 64 cores of our machine. <s> BIB010 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Computing the frequency of small subgraphs on a large network is a computationally hard task. This is, however, an important graph mining primitive, with several applications, and here we present a novel multicore parallel algorithm for this task. At the core of our methodology lies a state-of-the-art data structure, the g-trie, which represents a collection of subgraphs and allows for a very efficient sequential search. Our implementation was done using Pthreads and can run on any multicore personal computer. We employ a diagonal work sharing strategy to dynamically and effectively divide work among threads during the execution.
We assess the performance of our Pthreads implementation on a set of representative networks from various domains and with diverse topological features. For most networks, we obtain a speedup of over 50 for 64 cores and an almost linear speedup up to 32 cores, showcasing the flexibility and scalability of our algorithm. This paves the way for the usage of such counting algorithms on larger subgraph and network sizes without the obligatory access to a cluster. <s> BIB011 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Enumerating all subgraphs of an input graph is an important task for analyzing complex networks. Valuable information can be extracted about the characteristics of the input graph using all-subgraph enumeration. Not withstanding, the number of subgraphs grows exponentially with growth of the input graph or by increasing the size of the subgraphs to be enumerated. Hence, all-subgraph enumeration is very time consuming when the size of the subgraphs or the input graph is big. We propose a parallel solution named Subenum which in contrast to available solutions can perform much faster. Subenum enumerates subgraphs using edges instead of vertices, and this approach leads to a parallel and load-balanced enumeration algorithm that can have efficient execution on current multicore and multiprocessor machines. Also, Subenum uses a fast heuristic which can effectively accelerate nonisomorphism subgraph enumeration. Subenum can efficiently use external memory, and unlike other subgraph enumeration methods, it is not associated with the main memory limits of the used machine. Hence, Subenum can handle large input graphs and subgraph sizes that other solutions cannot handle. Several experiments are done using real-world input graphs. 
Compared to the available solutions, Subenum can enumerate subgraphs several orders of magnitude faster and the experimental results show that the performance of Subenum scales almost linearly by using additional processor cores. <s> BIB012 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Discovery of frequent subgraphs of a network is a challenging and time-consuming process. Several heuristics and improvements have been proposed before. However, when the size of subgraphs or the size of network is big, the process cannot be done in feasible time on a single machine. One of the promising solutions is using the processing power of available parallel and distributed systems. In this paper, we present a distributed solution for discovery of frequent subgraphs of a network using the MapReduce framework. The solution is named MRSUB and is developed to run over the Hadoop framework. MRSUB uses a novel and load-balanced parallel subgraph enumeration algorithm and fits it into the MapReduce framework. Also, a fast subgraph isomorphism detection heuristic is used which accelerates the whole process further. We executed MRSUB on a private cloud infrastructure with 40 machines and performed several experiments with different networks. Experimental results show that MRSUB scales well and offers an effective solution for discovery of frequent subgraphs of networks which are not possible on a single machine in feasible time. <s> BIB013 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> The identification of network motifs has important applications in numerous domains, such as pattern detection in biological networks and graph analysis in digital circuits. 
However, mining network motifs is computationally challenging, as it requires enumerating subgraphs from a real-life graph, and computing the frequency of each subgraph in a large number of random graphs. In particular, existing solutions often require days to derive network motifs from biological networks with only a few thousand vertices. To address this problem, this paper presents a novel study on network motif discovery using Graphical Processing Units (GPUs). The basic idea is to employ GPUs to parallelize a large number of subgraph matching tasks in computing subgraph frequencies from random graphs, so as to reduce the overall computation time of network motif discovery. We explore the design space of GPU-based subgraph matching algorithms, with careful analysis of several crucial factors that affect the performance of GPU programs. Based on our analysis, we develop a GPU-based solution that (i) considerably differs from existing CPU-based methods, and (ii) exploits the strengths of GPUs in terms of parallelism while mitigating their limitations in terms of the computation power per GPU core. With extensive experiments on a variety of biological networks, we show that our solution is up to two orders of magnitude faster than the best CPU-based approach, and is around 20 times more cost-effective than the latter, when taking into account the monetary costs of the CPU and GPUs used. <s> BIB014 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). 
In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multi-core CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics. <s> BIB015 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Processing large complex networks like social networks or web graphs has recently attracted considerable interest. In order to do this in parallel, we need to partition them into pieces of about equal size. Unfortunately, previous parallel graph partitioners originally developed for more regular mesh-like networks do not work well for these networks. This paper addresses this problem by parallelizing and adapting the label propagation technique originally developed for graph clustering. By introducing size constraints, label propagation becomes applicable for both the coarsening and the refinement phase of multilevel graph partitioning. We obtain very high quality by applying a highly parallel evolutionary algorithm to the coarsened graph. The resulting system is both more scalable and achieves higher quality than state-of-the-art systems like ParMetis or PT-Scotch. For large complex networks the performance differences are very big. 
For example, our algorithm can partition a web graph with 3.3 billion edges in less than sixteen seconds using 512 cores of a high performance cluster while producing a high quality partition -- none of the competing systems can handle this graph on our system. <s> BIB016 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Historical Overview <s> Networks are powerful in representing a wide variety of systems in many fields of study. Networks are composed of smaller substructures (subgraphs) that characterize them and give important information related to their topology and functionality. Therefore, discovering and counting these subgraph patterns is very important towards mining the features of networks. Algorithmically, subgraph counting in a network is a computationally hard problem and the needed execution time grows exponentially as the size of the subgraph or the network increases. The main goal of this paper is to contribute towards subgraph search, by providing an accessible and scalable parallel methodology for counting subgraphs. For that we present a dynamic iterative MapReduce strategy to parallelize algorithms that induce an unbalanced search tree, and apply it in the subgraph counting realm. At the core of our methods lies the g-trie, a state-of-the-art data structure that was created precisely for this task. Our strategy employs an adaptive time threshold and an efficient work-sharing mechanism to dynamically do load balancing between the workers. We evaluate our implementations using Spark on a large set of representative complex networks from different fields. The results obtained are very promising and we achieved a consistent and almost linear speedup up to 32 cores, with an average efficiency close to 80+. To the best of our knowledge this is the fastest and most scalable method for subgraph counting within the MapReduce programming model. <s> BIB017
One key aspect necessary to achieve a scalable parallel computation is finding a balanced work division, i.e., splitting work-units evenly between workers (parallel processors or threads). A naive possibility for subgraph counting is to assign |V (G)|/|P | nodes from network G to each worker p ∈ P. This egalitarian division is a poor choice since two nodes can induce very different search spaces; for instance, hub-like nodes induce many more subgraph occurrences than nearly-isolated nodes. Instead of performing an egalitarian division, Wang et al. BIB002 discriminate nodes by their degree and distribute them among workers, the idea being that each worker gets roughly the same amount of hard and easy work-units. Despite achieving a more balanced division than the naive version, there is still no guarantee that node degree alone determines the actual complexity of a work-unit. Distributing all work immediately (without runtime adjustments) is called a static division. Wang et al. did not assess scalability in BIB002 , but they showed that their parallel algorithm was faster than Mfinder on an E. coli transcriptional regulation network. Since their method was not named, we refer to it as ParWang henceforth. The first parallel strategy with a single-subgraph-search algorithm at its core, namely Grochow BIB004 , was by Schatz et al. Since the algorithm was not named, and it targets a distributed memory (DM) architecture (i.e., a parallel cluster), we refer to it as DM-Grochow. In order to distribute query subgraphs (also called isoclasses) among workers they employed two strategies: naive and first-fit. The naive strategy is similar to ParWang's. In the first-fit model, each slave processor requests a subgraph type (or isoclass) from the master and enumerates all occurrences of that type (e.g., cliques, stars, chains). This division is dynamic, as opposed to static, but it is not balanced since different isoclasses induce very different search trees.
For instance, in sparse networks k-cliques are faster to compute than k-chains. Using 64 cores, Schatz et al. obtained ≈10-15x speedups over the sequential version on a yeast PPI network. They also tried another novel approach: partitioning the network instead of partitioning the subgraph-set. However, finding adequate partitions for subgraph counting is a very hard problem due to partition overlaps and subgraphs traversing different partitions, and no speedup was obtained using this strategy. We should note that parallel graph partitioning remains an active research problem to this day BIB016 , but it is out of the scope of this work. All parallel algorithms mentioned so far traverse occurrences in a depth-first (DFS) fashion, since doing so avoids having to store intermediate states. By contrast, Liu et al. BIB006 use a breadth-first search (BFS) where, at each step, all subgraph occurrences found in the previous step are expanded by one node. Their algorithm, MRPF, is implemented following the MapReduce model BIB001 , which is intrinsically a BFS-like framework. In MRPF, mappers extend size k occurrences to size k + 1 and reducers remove repeated occurrences. At each BFS-level, MRPF divides work-units evenly among workers. We still consider this to be a static division since no adjustments are made at runtime. Thus, in our terminology, static divisions can be performed only once (at the start of computation in DFS-like algorithms) or multiple times (once per level in BFS-like algorithms). Overhead caused by reading and writing to files reduces MRPF's efficiency, but the authors report speedups of ≈7x on a 48-node cluster when compared to execution on a single processor. DFS-based algorithms discussed so far either perform a complete work-division right at the beginning (ParWang), or they perform a partial work-division at the beginning and then workers request work when idle (DM-Grochow).
In both cases, a worker has to finish a work-unit before proceeding to a new one. Therefore, it is possible that a worker gets stuck processing a very computationally heavy work-unit while all the others are idle. This has to do with work-unit granularity: work-units at the top of the DFS search space have high (coarse) granularity since the algorithm has to explore a large search space. BFS-based algorithms mitigate this problem because work-units are much more fine-grained (usually a worker only extends its work-unit(s) by one node). The work by Ribeiro et al. was the first to implement work sharing during parallel subgraph counting, alleviating the problem of coarse work-unit granularity of DFS-based subgraph counting algorithms. Workers have a splitting threshold that dictates how likely they are to put part of a work-unit in a global work queue instead of fully processing it. A work-unit is divided using diagonal work splitting, which gathers unprocessed nodes at level k (i.e., nodes that are reached by expanding the current work-unit) and recursively goes up in the search tree, also gathering unprocessed nodes of levels k − i, i < k, until reaching level 1. This process results in a set of finer-grained work-units that induces a more balanced search space than static and first-fit divisions. Ribeiro et al. use ESU as their core enumeration algorithm and propose a master-worker (M-W) architecture where a master node manages a work queue and distributes its work-units among slave workers. This strategy, DM-ESU, was the first to achieve near-linear speedups (≈128x on a 128-node cluster) on a set of heterogeneous networks. A subsequent version BIB007 used GTries as its base algorithm and implemented a worker-worker (W-W) architecture where workers perform work stealing. DM-GTries improves upon DM-ESU by using a faster enumeration algorithm (GTries) and by having all workers perform subgraph enumeration (without wasting a node on work queue management).
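The diagonal splitting idea can be sketched as follows; representing the donor's DFS state as per-level candidate lists is our simplification of the scheme described above, not the paper's own data structure.

```python
def diagonal_split(frontiers):
    """Sketch of diagonal work splitting: `frontiers[i]` holds the
    unprocessed candidate nodes at recursion level i of the donor's
    current DFS state. Half of the candidates at every level are
    donated as (level, node) work-units, turning one coarse work-unit
    into many finer-grained ones, while the donor keeps the rest."""
    kept, donated = [], []
    for level, candidates in enumerate(frontiers):
        kept.append(candidates[0::2])  # donor keeps every other candidate
        donated.extend((level, v) for v in candidates[1::2])
    return kept, donated
```

Because candidates are taken from every level, the donated set mixes deep (cheap) and shallow (expensive) work, which is what makes the resulting division more balanced than splitting a single level.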
Similar implementations (based on W-W sharing and diagonal splitting) of GTries and FASE were also developed for shared memory (SM) environments, achieving near-linear speedups on a 64-core machine BIB010 BIB011 . The main advantages of SM implementations are that work sharing is faster (since no message passing is necessary) and that SM architectures (such as multicores) are a commodity while DM architectures (such as clusters) are not. Instead of developing efficient work sharing strategies, Shahrivari and Jalili BIB012 try to avoid the unbalanced computation induced by vertex-based work-unit division. Subenum is an adaptation of ESU which uses edges as starting work-units, achieving near-linear speedup (≈10x on a 12-core machine). Using edges as starting work-units is also more suitable for the MapReduce model since edges are finer-grained work-units than vertices. In a follow-up work BIB013 , Shahrivari and Jalili propose a MapReduce algorithm, MRSUB, which greatly improves upon BIB006 , reporting a speedup of ≈34x on a 40-core machine. Like Subenum, MRSUB does not support work sharing between workers. A MapReduce algorithm with work sharing was put forward by Naser-eddin and Ribeiro BIB017 , henceforth called MR-GTries. Using work sharing with timed redistribution (i.e., after a certain time, every worker stops and work is fully redistributed), they report a speedup of ≈26x on a 32-core machine. While MRSUB's and MR-GTries' efficiency is comparable (≈80%), the latter has a much faster sequential algorithm at its core; therefore, in terms of absolute runtime, MR-GTries is the fastest MapReduce subgraph counting algorithm that we know of. Graphics processing units (GPUs) are processors specialized in image generation, but numerous general purpose tasks have been adapted to them BIB005 BIB008 BIB009 .
GPUs are appealing due to their large number of cores, reaching hundreds or thousands of parallel threads, whereas commodity multicores typically have no more than a dozen. However, algorithms that rely on graph traversal are not well suited to the GPU framework due to branching code, non-coalesced memory accesses and coarse work-unit granularity BIB009 . Milinković et al. were among the first to follow a GPU approach (GPU-Orca), with limited success. Lin et al. BIB014 put forward a GPU algorithm (henceforth referred to as Lin, since it was unnamed) mostly targeted at network motif discovery but also with some emphasis on efficient subgraph enumeration. Lin avoids duplicates in a similar fashion to ESU BIB003 , and auxiliary arrays are used to mitigate uncoalesced memory accesses. A BFS-style traversal is used (extending each subgraph one node at a time) to better balance work-units among threads. The authors compare Lin running on a 2496-core GPU (Tesla K20) against parallel CPU algorithms and report a speedup of ≈10x over a 6-core execution of the fastest CPU algorithm, DM-GTries. Rossi and Zhou proposed the first algorithm that combines multiple GPUs and CPUs BIB015 . Their method dynamically distributes work between CPUs and GPUs: unbalanced computation is given to the CPUs, whereas the GPUs compute the more regular work-units. Since their method was not named, we refer to it as GPU-PGD. Their hybrid CPU-GPU version achieves speedups of ≈20x to ≈200x when compared to sequential PGD, depending largely on the network. As mentioned in Section 3, PGD is one of the fastest methods for sequential subgraph counting; as such, GPU-PGD is the fastest subgraph counting algorithm currently available as far as we know. However, GPU-PGD is limited to 4-node subgraphs, while DM-GTries is the fastest general approach.
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets

Distributed Memory (DM).
A parallel cluster offers the opportunity to use multiple (heterogeneous) machines to speed up computation. Clusters can have hundreds of processors and therefore, if speedup is linear, computation time can be reduced from weeks to just a few hours. For work sharing to be performed efficiently on DM architectures, one can either have a master node mediating work sharing or have workers steal work directly from each other BIB001 BIB002 . Usually DM approaches are implemented directly using MPI [151-153, 164, 190], but higher-level software, such as GraphLab, can also be used BIB003 . DM has the drawback that workers have to send messages through the network, making network bandwidth a bottleneck.
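The dynamic load-balancing pattern can be mimicked in-process with a shared queue standing in for MPI messages. This is a toy sketch under our own assumptions (the splitting rule, threshold and names are illustrative, not the code of any cited system):

```python
import queue
import threading

SPLIT_THRESHOLD = 4  # units larger than this are split instead of kept whole

def worker(work_queue, totals):
    local = 0
    while True:
        try:
            unit = work_queue.get(timeout=0.2)
        except queue.Empty:
            break                        # no work left anywhere: terminate
        if len(unit) > SPLIT_THRESHOLD:
            mid = len(unit) // 2
            work_queue.put(unit[mid:])   # share the coarse half with others
            unit = unit[:mid]
        local += sum(unit)               # stand-in for counting subgraphs
    totals.append(local)

work_queue = queue.Queue()
work_queue.put(list(range(100)))         # one very coarse initial work-unit
totals = []
threads = [threading.Thread(target=worker, args=(work_queue, totals))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(totals))                       # 4950, regardless of the schedule
```

The point of the sketch is that the total count is independent of how the units end up divided among workers; only the load balance changes.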
Shared Memory (SM)
SM approaches have the advantage that their underlying hardware is a commodity (multicore computers). Furthermore, workers in an SM environment do not communicate via network messages (since they can communicate directly through main memory), thus avoiding a network-bandwidth bottleneck. However, the number of cores is usually very low when compared to DM, MapReduce and GPU architectures. Algorithms on multicores tend to traverse the search space in a DFS fashion BIB003 BIB001 BIB002 BIB004 , thus avoiding the storage of a large number of subgraph occurrences on disk or in main memory.
5.2.3
MapReduce. The MapReduce paradigm has been successfully applied to problems where each worker executes very similar tasks, which is the case for subgraph counting. MapReduce is an inherently BFS-oriented model, whereas most subgraph counting algorithms are DFS-based. The biggest drawback of using MapReduce is the huge number of subgraph occurrences that are stored in files between each BFS-level iteration (corresponding to a node expansion) BIB001 BIB002 . To avoid this drawback, one can instead store them in RAM when the number of occurrences fits in memory BIB003 .
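A minimal level-synchronous sketch of this BFS style (our own simplification, not any cited system's code): each "map" extends every size-i occurrence by one adjacent node, and the "reduce" deduplicates the results. Real frameworks shuffle these records through files between rounds or, in the in-memory variants, keep them in RAM.

```python
from itertools import chain

def expand(graph, occurrence):
    # map step: all ways of growing this occurrence by one adjacent node
    frontier = set(chain.from_iterable(graph[v] for v in occurrence))
    return [occurrence | {u} for u in frontier - occurrence]

def count_k_subgraphs(graph, k):
    level = {frozenset({v}) for v in graph}        # size-1 occurrences
    for _ in range(k - 1):                         # one BFS level per round
        mapped = chain.from_iterable(expand(graph, occ) for occ in level)
        level = set(mapped)                        # reduce step: deduplicate
    return len(level)

# triangle 0-1-2 with a tail node 3 hanging off vertex 2
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(count_k_subgraphs(graph, 3))  # 3 connected 3-node subgraphs
```

Note how the intermediate `level` sets grow with each round; that growth is exactly the storage burden discussed above.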
GPU.
GPUs are very appealing due to their large number of parallel threads. Although linear speedups are rare on the GPU, the gains can still be substantial because of the sheer number of cores. However, GPUs are not well suited for graph traversal algorithms. One of the current best pure BFS algorithms for the GPU BIB003 only achieves a speedup of ≈8x (on a 448-core NVIDIA C2050) when compared to a 4-core CPU BFS algorithm BIB001 . By contrast, Monte Carlo calculations on an NVIDIA C2050 GPU achieve a speedup of ≈30x BIB002 when compared to a 4-core CPU implementation. This is mainly due to branching problems, uncoalesced memory accesses and coarse work-unit granularity, sometimes leading to almost non-existent speedups in subgraph counting. Using additional memory to efficiently store neighbors, together with smart work division, helps achieve some speedup BIB004 . Another approach is to combine CPUs and GPUs: CPUs handle unbalanced computation while GPUs execute the regular computation BIB005 .
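The hybrid idea can be illustrated with a deliberately simple dispatch rule. The proxy for "irregularity" used here (seed-vertex degree) and all names are our own assumptions, not GPU-PGD's actual heuristic:

```python
def dispatch(adj, degree_threshold):
    # route work-units seeded at high-degree vertices (deep, unbalanced
    # searches) to the CPU, and the regular low-degree ones to the GPU
    cpu, gpu = [], []
    for v, nbrs in adj.items():
        (cpu if len(nbrs) > degree_threshold else gpu).append(v)
    return cpu, gpu

star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
cpu, gpu = dispatch(star, degree_threshold=2)
print(cpu)  # [0]          -> the hub goes to the CPU
print(gpu)  # [1, 2, 3, 4] -> the regular leaves go to the GPU
```

In a real hybrid system the threshold would be tuned dynamically, but the principle is the same: the CPU absorbs the skewed tail of the work distribution so GPU warps stay uniform.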
Vertices.
One possibility is to consider each vertex v ∈ V (G) as a work-unit and split these work-units among workers. A worker p then computes all size-k subgraph occurrences that contain each of its assigned vertices. Naive approaches have different workers finding repeated occurrences that need to be removed BIB001 , but efficient sequential algorithms use canonical representations that eliminate this problem BIB003 BIB006 BIB002 , making each work-unit independent. Using vertices as work-units has the drawback of creating very coarse work-units: different vertices induce search spaces with very different computational costs. For instance, counting all the subgraph occurrences that start (or eventually reach) a hub-like node is much more time-consuming than counting occurrences of a nearly isolated node. For vertex-based division to be efficient, algorithms can either try to find a good initial division BIB001 or enable work sharing between workers BIB007 BIB008 BIB004 BIB005 . Both of these work division strategies are discussed in Section 5.5.
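As a concrete illustration, the sketch below (a minimal single-process simulation; all names are ours and not taken from any of the cited tools) uses the ESU canonical rule, under which a subgraph is enumerated only from its smallest vertex, so each vertex is an independent work-unit; a static split then assigns vertices round-robin to workers:

```python
def count_from_root(graph, v, k):
    """Count size-k connected induced subgraphs whose smallest vertex is v.
    The ESU rule (extend only with vertices whose id is larger than the
    root) guarantees each subgraph is found exactly once, so no
    deduplication between workers is needed."""
    count = 0

    def extend(vs, ext):
        nonlocal count
        if len(vs) == k:
            count += 1
            return
        ext = set(ext)
        while ext:
            w = ext.pop()
            neigh_vs = {u for x in vs for u in graph[x]}
            # exclusive neighbors of w: larger than the root, not in the
            # current subgraph and not already reachable from it
            excl = {u for u in graph[w]
                    if u > v and u not in vs and u not in neigh_vs}
            extend(vs | {w}, ext | excl)

    extend({v}, {u for u in graph[v] if u > v})
    return count


def parallel_census(graph, k, num_workers=2):
    """Static vertex split: worker w gets every num_workers-th vertex.
    Executed sequentially here; in a real setting each bucket would go to
    a separate thread, process or machine."""
    buckets = [[v for i, v in enumerate(sorted(graph)) if i % num_workers == w]
               for w in range(num_workers)]
    return sum(count_from_root(graph, v, k) for b in buckets for v in b)
```

Running this on a star (hub-and-spoke) graph shows the imbalance discussed above: the bucket holding the hub vertex performs essentially all of the work, while the other buckets finish immediately.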
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Edges. <s> Network motifs are basic building blocks in complex networks. Motif detection has recently attracted much attention as a topic to uncover structural design principles of complex networks. Pattern finding is the most computationally expensive step in the process of motif detection. In this paper, we design a pattern finding algorithm based on Google MapReduce to improve the efficiency. Performance evaluation shows our algorithm can facilitates the detection of larger motifs in large size networks and has good scalability. We apply it in the prescription network and find some commonly used prescription network motifs that provide the possibility to further discover the law of prescription compatibility. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Edges. <s> From social science to biology, numerous applications often rely on graphlets for intuitive and meaningful characterization of networks at both the global macro-level as well as the local micro-level. While graphlets have witnessed a tremendous success and impact in a variety of domains, there has yet to be a fast and efficient approach for computing the frequencies of these subgraph patterns. However, existing methods are not scalable to large networks with millions of nodes and edges, which impedes the application of graphlets to new problems that require large-scale network analysis. To address these problems, we propose a fast, efficient, and parallel algorithm for counting graphlets of size k={3,4}-nodes that take only a fraction of the time to compute when compared with the current methods used. The proposed graphlet counting algorithms leverages a number of proven combinatorial arguments for different graphlets. 
For each edge, we count a few graphlets, and with these counts along with the combinatorial arguments, we obtain the exact counts of others in constant time. On a large collection of 300+ networks from a variety of domains, our graphlet counting strategies are on average 460x faster than current methods. This brings new opportunities to investigate the use of graphlets on much larger networks and newer applications as we show in the experiments. To the best of our knowledge, this paper provides the largest graphlet computations to date as well as the largest systematic investigation on over 300+ networks from a variety of domains. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Edges. <s> Discovery of frequent subgraphs of a network is a challenging and time-consuming process. Several heuristics and improvements have been proposed before. However, when the size of subgraphs or the size of network is big, the process cannot be done in feasible time on a single machine. One of the promising solutions is using the processing power of available parallel and distributed systems. In this paper, we present a distributed solution for discovery of frequent subgraphs of a network using the MapReduce framework. The solution is named MRSUB and is developed to run over the Hadoop framework. MRSUB uses a novel and load-balanced parallel subgraph enumeration algorithm and fits it into the MapReduce framework. Also, a fast subgraph isomorphism detection heuristic is used which accelerates the whole process further. We executed MRSUB on a private cloud infrastructure with 40 machines and performed several experiments with different networks. Experimental results show that MRSUB scales well and offers an effective solution for discovery of frequent subgraphs of networks which are not possible on a single machine in feasible time. 
<s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Edges. <s> Enumerating all subgraphs of an input graph is an important task for analyzing complex networks. Valuable information can be extracted about the characteristics of the input graph using all-subgraph enumeration. Not withstanding, the number of subgraphs grows exponentially with growth of the input graph or by increasing the size of the subgraphs to be enumerated. Hence, all-subgraph enumeration is very time consuming when the size of the subgraphs or the input graph is big. We propose a parallel solution named Subenum which in contrast to available solutions can perform much faster. Subenum enumerates subgraphs using edges instead of vertices, and this approach leads to a parallel and load-balanced enumeration algorithm that can have efficient execution on current multicore and multiprocessor machines. Also, Subenum uses a fast heuristic which can effectively accelerate nonisomorphism subgraph enumeration. Subenum can efficiently use external memory, and unlike other subgraph enumeration methods, it is not associated with the main memory limits of the used machine. Hence, Subenum can handle large input graphs and subgraph sizes that other solutions cannot handle. Several experiments are done using real-world input graphs. Compared to the available solutions, Subenum can enumerate subgraphs several orders of magnitude faster and the experimental results show that the performance of Subenum scales almost linearly by using additional processor cores. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Edges. <s> Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. 
In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multi-core CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics. <s> BIB005
Due to the unbalanced search tree induced by vertex division, some algorithms use edges as work-units BIB002 BIB001 BIB003 BIB004 . The idea is similar to vertex division: distribute all edges e(vi, vj) ∈ E(G) evenly among the workers. An initial edge division guarantees that all workers have an equal number of 2-node subgraphs, which is not true for vertex division. However, for k ≥ 3 this strategy offers no guarantees in terms of workload balancing. Hence, in regular networks (i.e., networks where all nodes have similar clustering coefficients) this strategy achieves good speedups, but it is not scalable in general. Some methods BIB004 BIB005 perform dynamic first-fit division (discussed in Section 5.5.2) instead of the simple static division described above.
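A minimal sketch of the edge-centric, combinatorial flavor (greatly simplified from the cited methods; all names are ours): each worker receives a static share of the edges and counts, per edge, the common neighbors of its endpoints; since every triangle is seen from each of its three edges, a combinatorial correction (dividing by three) yields the global triangle count:

```python
def count_triangles_edge_centric(graph, num_workers=4):
    """Edge work-units with a static round-robin split. Each worker sums
    per-edge triangle counts; the division by 3 corrects for the fact
    that each triangle contains 3 edges."""
    edges = sorted((u, v) for u in graph for v in graph[u] if u < v)
    shares = [edges[w::num_workers] for w in range(num_workers)]  # static split
    partials = [sum(len(graph[u] & graph[v]) for u, v in share)
                for share in shares]
    return sum(partials) // 3
```

The static split gives every worker the same number of edges, but, as noted above, edges incident to hubs induce far more common-neighbor work than peripheral edges, which is why some methods hand out edges dynamically instead.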
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraphs. <s> Network motifs are basic building blocks in complex networks. Motif detection has recently attracted much attention as a topic to uncover structural design principles of complex networks. Pattern finding is the most computationally expensive step in the process of motif detection. In this paper, we design a pattern finding algorithm based on Google MapReduce to improve the efficiency. Performance evaluation shows our algorithm can facilitates the detection of larger motifs in large size networks and has good scalability. We apply it in the prescription network and find some commonly used prescription network motifs that provide the possibility to further discover the law of prescription compatibility. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraphs. <s> The identification of network motifs has important applications in numerous domains, such as pattern detection in biological networks and graph analysis in digital circuits. However, mining network motifs is computationally challenging, as it requires enumerating subgraphs from a real-life graph, and computing the frequency of each subgraph in a large number of random graphs. In particular, existing solutions often require days to derive network motifs from biological networks with only a few thousand vertices. To address this problem, this paper presents a novel study on network motif discovery using Graphical Processing Units (GPUs). The basic idea is to employ GPUs to parallelize a large number of subgraph matching tasks in computing subgraph frequencies from random graphs, so as to reduce the overall computation time of network motif discovery. We explore the design space of GPU-based subgraph matching algorithms, with careful analysis of several crucial factors that affect the performance of GPU programs. 
Based on our analysis, we develop a GPU-based solution that (i) considerably differs from existing CPU-based methods, and (ii) exploits the strengths of GPUs in terms of parallelism while mitigating their limitations in terms of the computation power per GPU core. With extensive experiments on a variety of biological networks, we show that our solution is up to two orders of magnitude faster than the best CPU-based approach, and is around 20 times more cost-effective than the latter, when taking into account the monetary costs of the CPU and GPUs used. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraphs. <s> Discovery of frequent subgraphs of a network is a challenging and time-consuming process. Several heuristics and improvements have been proposed before. However, when the size of subgraphs or the size of network is big, the process cannot be done in feasible time on a single machine. One of the promising solutions is using the processing power of available parallel and distributed systems. In this paper, we present a distributed solution for discovery of frequent subgraphs of a network using the MapReduce framework. The solution is named MRSUB and is developed to run over the Hadoop framework. MRSUB uses a novel and load-balanced parallel subgraph enumeration algorithm and fits it into the MapReduce framework. Also, a fast subgraph isomorphism detection heuristic is used which accelerates the whole process further. We executed MRSUB on a private cloud infrastructure with 40 machines and performed several experiments with different networks. Experimental results show that MRSUB scales well and offers an effective solution for discovery of frequent subgraphs of networks which are not possible on a single machine in feasible time. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraphs. 
<s> We present a novel distributed algorithm for counting all four-node induced subgraphs in a big graph. These counts, called the $4$-profile, describe a graph's connectivity properties and have found several uses ranging from bioinformatics to spam detection. We also study the more complicated problem of estimating the local $4$-profiles centered at each vertex of the graph. The local $4$-profile embeds every vertex in an $11$-dimensional space that characterizes the local geometry of its neighborhood: vertices that connect different clusters will have different local $4$-profiles compared to those that are only part of one dense cluster. ::: Our algorithm is a local, distributed message-passing scheme on the graph and computes all the local $4$-profiles in parallel. We rely on two novel theoretical contributions: we show that local $4$-profiles can be calculated using compressed two-hop information and also establish novel concentration results that show that graphs can be substantially sparsified and still retain good approximation quality for the global $4$-profile. ::: We empirically evaluate our algorithm using a distributed GraphLab implementation that we scaled up to $640$ cores. We show that our algorithm can compute global and local $4$-profiles of graphs with millions of edges in a few minutes, significantly improving upon the previous state of the art. <s> BIB004
At the start of the computation, only the vertices and edges of the network are known. As the k-subgraph counting process proceeds, subgraphs of sizes k − i, with 0 < i < k, are found. Thus, the work-units divided among threads can be these intermediate states (incomplete subgraphs). Some BFS-based algorithms BIB002 BIB001 BIB003 begin with either edges or vertices as initial work-units and, at the end of each BFS-level, the intermediate subgraphs found are divided among workers. DFS-based methods expand each subgraph work-unit by one node until they reach a k-subgraph BIB004 BIB003 .
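A minimal sketch of intermediate subgraphs as work-units (illustrative code, not taken from any cited system): the map-style step grows one incomplete subgraph by a single vertex, and a reduce-style merge under a canonical key (here simply the vertex frozenset, which suffices for induced subgraphs) removes the duplicate occurrences that different workers inevitably produce:

```python
def expand_once(graph, sub):
    """One work-unit: grow an intermediate (incomplete) subgraph by a
    single neighboring vertex, yielding the candidate states of the next
    level."""
    return {sub | {u} for v in sub for u in graph[v] if u not in sub}


def expand_level(graph, frontier):
    """Merge step: collecting results in a set keyed by the vertex
    frozenset deduplicates occurrences reached via different parents
    (and, in a parallel run, via different workers)."""
    out = set()
    for sub in frontier:          # each sub could be handled by any worker
        out |= expand_once(graph, sub)
    return out
```

Each work-unit is independent, so a scheduler can hand individual intermediate subgraphs to idle workers; the price is the merge step needed between levels.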
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph-trees. <s> Finding and counting the occurrences of a collection of subgraphs within another larger network is a computationally hard problem, closely related to graph isomorphism. The subgraph count is by itself a very powerful characterization of a network and it is crucial for other important network measurements. G-tries are a specialized data-structure designed to store and search for subgraphs. By taking advantage of subgraph common substructure, g-tries can provide considerable speedups over previously used methods. In this paper we present a parallel algorithm based precisely on g-tries that is able to efficiently find and count subgraphs. The algorithm relies on randomized receiver-initiated dynamic load balancing and is able to stop its computation at any given time, efficiently store its search position, divide what is left to compute in two halves, and resume from where it left. We apply our algorithm to several representative real complex networks from various domains and examine its scalability. We obtain an almost linear speedup up to 128 processors, thus allowing us to reach previously unfeasible limits. We showcase the multidisciplinary potential of the algorithm by also applying it to network motif discovery. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph-trees. <s> Many natural and artificial structures can be represented as complex networks. Computing the frequency of all subgraphs of a certain size can give a very comprehensive structural characterization of these networks. This is known as the subgraph census problem, and it is also important as an intermediate step in the computation of other features of the network, such as network motifs. The subgraph census problem is computationally hard and most associated algorithms for it are sequential.
Here we present several increasingly efficient parallel strategies for it, culminating in a scalable and adaptive parallel algorithm. We applied our strategies to a representative set of biological networks and achieved almost linear speedups up to 128 processors, paving the way for making it possible to compute the census for bigger networks and larger subgraph sizes. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph-trees. <s> Counting the occurrences of small subgraphs in large networks is a fundamental graph mining metric with several possible applications. Computing frequencies of those subgraphs is also known as the subgraph census problem, which is a computationally hard task. In this paper we provide a parallel multicore algorithm for this purpose. At its core we use FaSE, an efficient network-centric sequential subgraph census algorithm, which is able to substantially decrease the number of isomorphism tests needed when compared to past approaches. We use one thread per core and employ a dynamic load balancing scheme capable of dealing with the highly unbalanced search tree induced by FaSE and effectively redistributing work during execution. We assessed the scalability of our algorithm on a varied set of representative networks and achieved near linear speedup up to 32 cores while obtaining a high efficiency for the total 64 cores of our machine. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Subgraph-trees. <s> Computing the frequency of small subgraphs on a large network is a computationally hard task. This is, however, an important graph mining primitive, with several applications, and here we present a novel multicore parallel algorithm for this task.
At the core of our methodology lies a state-of-the-art data structure, the g-trie, which represents a collection of subgraphs and allows for a very efficient sequential search. Our implementation was done using Pthreads and can run on any multicore personal computer. We employ a diagonal work sharing strategy to dynamically and effectively divide work among threads during the execution. We assess the performance of our Pthreads implementation on a set of representative networks from various domains and with diverse topological features. For most networks, we obtain a speedup of over 50 for 64 cores and an almost linear speedup up to 32 cores, showcasing the flexibility and scalability of our algorithm. This paves the way for the usage of such counting algorithms on larger subgraph and network sizes without the obligatory access to a cluster. <s> BIB004
This approach is applicable only to DFS-like algorithms where, since the search tree is explored in a depth-first fashion, a work-tree is implicitly built during enumeration: when the algorithm is at level k of the search, unexplored candidates of stages {k − 1, k − 2, ..., 1} were previously generated. Then, instead of splitting only the top vertices from stage 1 (as described in Section 5.3.1), the search-tree itself is split among sharing processors BIB003 BIB004 BIB001 BIB002 (more details on this in Section 5.5.3). Subgraph-tree work-units are expected to have similar costs, since both coarse- and fine-grained work-units are generated. Nevertheless, it is not guaranteed that work-units from the same level of the search tree induce similar work. This strategy also incurs the additional complexity of building the candidate-set of each level and splitting it among workers.
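The following sketch (our own simplification, not the actual g-trie implementation of the cited works) makes the work-tree explicit: each stack entry is an ESU state (current subgraph, extension candidates, root), so a worker can stop after a budget of expansions and split its remaining states, which come from all depths of the tree, with an idle worker:

```python
def esu_states(graph):
    """One initial (coarse) work-unit per root vertex:
    (subgraph, extension, root)."""
    return [(frozenset([v]), frozenset(u for u in graph[v] if u > v), v)
            for v in sorted(graph)]


def run(graph, k, stack, budget=None):
    """Process ESU states depth-first. With a budget, the worker stops
    early and returns the untouched part of its work-tree; that leftover
    can then be divided between workers (it mixes coarse shallow states
    and fine deep states)."""
    count, steps = 0, 0
    while stack:
        if budget is not None and steps >= budget:
            break
        vs, ext, root = stack.pop()
        if len(vs) == k:
            count += 1
            continue
        ext = set(ext)
        while ext:
            w = ext.pop()
            neigh = {u for x in vs for u in graph[x]}
            excl = {u for u in graph[w]
                    if u > root and u not in vs and u not in neigh}
            stack.append((vs | {w}, frozenset(ext | excl), root))
        steps += 1
    return count, stack
```

Because ESU enumerates each subgraph exactly once regardless of processing order, the partial counts of the original worker and of the workers that receive donated states always sum to the correct total.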
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Breadth-First Search. <s> Network motifs are basic building blocks in complex networks. Motif detection has recently attracted much attention as a topic to uncover structural design principles of complex networks. Pattern finding is the most computationally expensive step in the process of motif detection. In this paper, we design a pattern finding algorithm based on Google MapReduce to improve the efficiency. Performance evaluation shows our algorithm can facilitates the detection of larger motifs in large size networks and has good scalability. We apply it in the prescription network and find some commonly used prescription network motifs that provide the possibility to further discover the law of prescription compatibility. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Breadth-First Search. <s> Discovery of frequent subgraphs of a network is a challenging and time-consuming process. Several heuristics and improvements have been proposed before. However, when the size of subgraphs or the size of network is big, the process cannot be done in feasible time on a single machine. One of the promising solutions is using the processing power of available parallel and distributed systems. In this paper, we present a distributed solution for discovery of frequent subgraphs of a network using the MapReduce framework. The solution is named MRSUB and is developed to run over the Hadoop framework. MRSUB uses a novel and load-balanced parallel subgraph enumeration algorithm and fits it into the MapReduce framework. Also, a fast subgraph isomorphism detection heuristic is used which accelerates the whole process further. We executed MRSUB on a private cloud infrastructure with 40 machines and performed several experiments with different networks. 
Experimental results show that MRSUB scales well and offers an effective solution for discovery of frequent subgraphs of networks which are not possible on a single machine in feasible time. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Breadth-First Search. <s> The identification of network motifs has important applications in numerous domains, such as pattern detection in biological networks and graph analysis in digital circuits. However, mining network motifs is computationally challenging, as it requires enumerating subgraphs from a real-life graph, and computing the frequency of each subgraph in a large number of random graphs. In particular, existing solutions often require days to derive network motifs from biological networks with only a few thousand vertices. To address this problem, this paper presents a novel study on network motif discovery using Graphical Processing Units (GPUs). The basic idea is to employ GPUs to parallelize a large number of subgraph matching tasks in computing subgraph frequencies from random graphs, so as to reduce the overall computation time of network motif discovery. We explore the design space of GPU-based subgraph matching algorithms, with careful analysis of several crucial factors that affect the performance of GPU programs. Based on our analysis, we develop a GPU-based solution that (i) considerably differs from existing CPU-based methods, and (ii) exploits the strengths of GPUs in terms of parallelism while mitigating their limitations in terms of the computation power per GPU core. With extensive experiments on a variety of biological networks, we show that our solution is up to two orders of magnitude faster than the best CPU-based approach, and is around 20 times more cost-effective than the latter, when taking into account the monetary costs of the CPU and GPUs used. 
<s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Breadth-First Search. <s> Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multi-core CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Breadth-First Search. <s> Networks are powerful in representing a wide variety of systems in many fields of study. Networks are composed of smaller substructures (subgraphs) that characterize them and give important information related to their topology and functionality. Therefore, discovering and counting these subgraph patterns is very important towards mining the features of networks. Algorithmically, subgraph counting in a network is a computationally hard problem and the needed execution time grows exponentially as the size of the subgraph or the network increases. 
The main goal of this paper is to contribute towards subgraph search, by providing an accessible and scalable parallel methodology for counting subgraphs. For that we present a dynamic iterative MapReduce strategy to parallelize algorithms that induce an unbalanced search tree, and apply it in the subgraph counting realm. At the core of our methods lies the g-trie, a state-of-the-art data structure that was created precisely for this task. Our strategy employs an adaptive time threshold and an efficient work-sharing mechanism to dynamically do load balancing between the workers. We evaluate our implementations using Spark on a large set of representative complex networks from different fields. The results obtained are very promising and we achieved a consistent and almost linear speedup up to 32 cores, with an average efficiency close to 80%. To the best of our knowledge this is the fastest and most scalable method for subgraph counting within the MapReduce programming model. <s> BIB005
Algorithms that adopt this strategy are typically MapReduce methods BIB001 BIB005 BIB002 or GPU approaches BIB003 BIB004 . MapReduce works intrinsically in BFS fashion, and GPUs are very inefficient when work is unbalanced and contains branching code. BFS proceeds by (i) splitting the edges among workers; (ii) expanding each edge (a size-2 subgraph) into size-3 patterns; (iii) splitting the size-3 patterns themselves among workers; and (iv) repeating this process until the desired size-k patterns are obtained. The idea of BFS is to give each worker a large number of fine-grained work-units that induce similar amounts of work, making the work division more balanced and the approach well suited to methods that require regular data. However, the main drawback is that these algorithms need to store partial results (which grow exponentially as k increases) and synchronize at the end of each BFS-level.
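Steps (i)-(iv) can be sketched as follows (a sequential simulation with illustrative names; a real MapReduce or GPU version would distribute the shares across machines or device cores and perform the merge as a reduce step):

```python
def bfs_census(graph, k, num_workers=4):
    """Level-synchronous subgraph census for connected induced subgraphs."""
    # (i) the initial work-units are the edges (size-2 subgraphs)
    frontier = {frozenset((u, v)) for u in graph for v in graph[u]}
    for _ in range(k - 2):
        # split the current level's work-units evenly among the workers
        ordered = sorted(frontier, key=lambda s: tuple(sorted(s)))
        shares = [ordered[w::num_workers] for w in range(num_workers)]
        merged = set()
        # (ii) each worker expands its share by one neighboring vertex
        for share in shares:          # one loop iteration per simulated worker
            for sub in share:
                for v in sub:
                    merged |= {sub | {u} for u in graph[v] if u not in sub}
        # (iii)-(iv) synchronize: the frozenset key deduplicates occurrences
        # found by different workers; the new level is then redistributed
        frontier = merged
    return len(frontier)
```

The `frontier` set materializes every intermediate subgraph of the current level, which is precisely the exponentially growing partial state that makes this approach memory-hungry for larger k.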
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 5.5.1 <s> Network motifs have been demonstrated to be the building blocks in many biological networks such as transcriptional regulatory networks. Finding network motifs plays a key role in understanding system level functions and design principles of molecular interactions. In this paper, we present a novel definition of the neighborhood of a node. Based on this concept, we formally define and present an effective algorithm for finding network motifs. The method seeks a neighborhood assignment for each node such that the induced neighborhoods are partitioned with no overlap. We then present a parallel algorithm to find network motifs using a parallel cluster. The algorithm is applied on an E. coli transcriptional regulatory network to find motifs with size up to six. Compared with previous algorithms, our algorithm performs better in terms of running time and precision. Based on the motifs that are found in the network, we further analyze the topology and coverage of the motifs. The results suggest that a small number of key motifs can form the motifs of a bigger size. Also, some motifs exhibit a correlation with complex functions. This study presents a framework for detecting the most significant recurring subgraph patterns in transcriptional regulatory networks. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 5.5.1 <s> Network motifs are basic building blocks in complex networks. Motif detection has recently attracted much attention as a topic to uncover structural design principles of complex networks. Pattern finding is the most computationally expensive step in the process of motif detection. In this paper, we design a pattern finding algorithm based on Google MapReduce to improve the efficiency. 
Performance evaluation shows our algorithm can facilitate the detection of larger motifs in large size networks and has good scalability. We apply it in the prescription network and find some commonly used prescription network motifs that provide the possibility to further discover the law of prescription compatibility. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 5.5.1 <s> Discovery of frequent subgraphs of a network is a challenging and time-consuming process. Several heuristics and improvements have been proposed before. However, when the size of subgraphs or the size of network is big, the process cannot be done in feasible time on a single machine. One of the promising solutions is using the processing power of available parallel and distributed systems. In this paper, we present a distributed solution for discovery of frequent subgraphs of a network using the MapReduce framework. The solution is named MRSUB and is developed to run over the Hadoop framework. MRSUB uses a novel and load-balanced parallel subgraph enumeration algorithm and fits it into the MapReduce framework. Also, a fast subgraph isomorphism detection heuristic is used which accelerates the whole process further. We executed MRSUB on a private cloud infrastructure with 40 machines and performed several experiments with different networks. Experimental results show that MRSUB scales well and offers an effective solution for discovery of frequent subgraphs of networks which are not possible on a single machine in feasible time. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 5.5.1 <s> Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data.
In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multi-core CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> 5.5.1 <s> Graphlets are induced subgraphs of a large network and are important for understanding and modeling complex networks. Despite their practical importance, graphlets have been severely limited to applications and domains with relatively small graphs. Most previous work has focused on exact algorithms, however, it is often too expensive to compute graphlets exactly in massive networks with billions of edges, and finding an approximate count is usually sufficient for many applications. 
In this work, we propose an unbiased graphlet estimation framework that is (a) fast with significant speedups compared to the state-of-the-art, (b) parallel with nearly linear-speedups, (c) accurate with <1% relative error, (d) scalable and space-efficient for massive networks with billions of edges, and (e) flexible for a variety of real-world settings, as well as estimating macro and micro-level graphlet statistics (e.g., counts) of both connected and disconnected graphlets. In addition, an adaptive approach is introduced that finds the smallest sample size required to obtain estimates within a given user-defined error bound. On 300 networks from 20 domains, we obtain <1% relative error for all graphlets. This is significantly more accurate than existing methods while using less data. Moreover, it takes a few seconds on billion edge graphs (as opposed to days/weeks). These are by far the largest graphlet computations to date. <s> BIB005
Static. The simplest form of work division is to produce an initial distribution of work-units and proceed with the parallel computation, without ever spending time dividing work during runtime. Trying to estimate the work beforehand BIB004 BIB001 is valuable but limited: if the estimation is quick but imprecise (such as using node degrees or clustering coefficients to estimate work-unit difficulty), few guarantees are offered that the work division is balanced, while obtaining a very precise estimation is as computationally expensive as doing the subgraph enumeration itself. Following a BFS approach BIB002 BIB003 helps balance the work-units, and a static work division at each BFS level is usually sufficient to obtain good results. However, those strategies have limitations, as discussed in Section 5.4.1. Some analytic works, which do not rely on explicit subgraph enumeration, do not need advanced work division strategies because their algorithms are almost embarrassingly parallel BIB005 .
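A static division with a cheap cost estimate could be sketched as below. This is a generic illustration, not the scheme of any cited work: it uses vertex degree as the (imprecise) cost proxy mentioned in the text and a greedy longest-processing-time assignment; the example degrees are made up.

```python
def static_division(degrees, n_workers):
    """Statically assign one work-unit per vertex before the run starts.

    Uses the vertex degree as a cheap proxy for how expensive it will be
    to enumerate subgraphs rooted at that vertex, and greedily gives each
    unit to the currently least-loaded worker.  The estimate is fast but
    coarse, which is exactly the limitation noted in the text: the real
    enumeration cost can deviate wildly from the degree.
    """
    loads = [0] * n_workers
    assignment = [[] for _ in range(n_workers)]
    # Assigning the largest estimated units first tightens the greedy balance.
    for v in sorted(degrees, key=degrees.get, reverse=True):
        w = loads.index(min(loads))   # least-loaded worker so far
        assignment[w].append(v)
        loads[w] += degrees[v]
    return assignment, loads

degrees = {'a': 50, 'b': 30, 'c': 20, 'd': 20, 'e': 5, 'f': 5}
assignment, loads = static_division(degrees, 2)
print(loads)  # [70, 60] -- balanced per the estimate, not per the true cost
```

If the degree estimate is wrong for even one vertex (say `'a'` turns out far harder than its degree suggests), the division stays skewed for the whole run, which is why dynamic redistribution is discussed next.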
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Dynamic: Diagonal Work <s> Finding and counting the occurrences of a collection of subgraphs within another larger network is a computationally hard problem, closely related to graph isomorphism. The subgraph count is by itself a very powerful characterization of a network and it is crucial for other important network measurements. G-tries are a specialized data-structure designed to store and search for subgraphs. By taking advantage of subgraph common substructure, g-tries can provide considerable speedups over previously used methods. In this paper we present a parallel algorithm based precisely on g-tries that is able to efficiently find and count subgraphs. The algorithm relies on randomized receiver-initiated dynamic load balancing and is able to stop its computation at any given time, efficiently store its search position, divide what is left to compute in two halfs, and resume from where it left. We apply our algorithm to several representative real complex networks from various domains and examine its scalability. We obtain an almost linear speedup up to 128 processors, thus allowing us to reach previously unfeasible limits. We showcase the multidisciplinary potential of the algorithm by also applying it to network motif discovery. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Dynamic: Diagonal Work <s> Many natural and artificial structures can be represented as complex networks. Computing the frequency of all subgraphs of a certain size can give a very comprehensive structural characterization of these networks. This is known as the subgraph census problem, and it is also important as an intermediate step in the computation of other features of the network, such as network motifs. 
The subgraph census problem is computationally hard and most associated algorithms for it are sequential. Here we present several increasingly efficient parallel strategies for this problem, culminating in a scalable and adaptive parallel algorithm. We applied our strategies to a representative set of biological networks and achieved almost linear speedups up to 128 processors, paving the way for making it possible to compute the census for bigger networks and larger subgraph sizes. <s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Dynamic: Diagonal Work <s> Counting the occurrences of small subgraphs in large networks is a fundamental graph mining metric with several possible applications. Computing frequencies of those subgraphs is also known as the subgraph census problem, which is a computationally hard task. In this paper we provide a parallel multicore algorithm for this purpose. At its core we use FaSE, an efficient network-centric sequential subgraph census algorithm, which is able to substantially decrease the number of isomorphism tests needed when compared to past approaches. We use one thread per core and employ a dynamic load balancing scheme capable of dealing with the highly unbalanced search tree induced by FaSE and effectively redistributing work during execution. We assessed the scalability of our algorithm on a varied set of representative networks and achieved near linear speedup up to 32 cores while obtaining a high efficiency for the total 64 cores of our machine. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Dynamic: Diagonal Work <s> Computing the frequency of small subgraphs on a large network is a computationally hard task. This is, however, an important graph mining primitive, with several applications, and here we present a novel multicore parallel algorithm for this task.
At the core of our methodology lies a state-of-the-art data structure, the g-trie, which represents a collection of subgraphs and allows for a very efficient sequential search. Our implementation was done using Pthreads and can run on any multicore personal computer. We employ a diagonal work sharing strategy to dynamically and effectively divide work among threads during the execution. We assess the performance of our Pthreads implementation on a set of representative networks from various domains and with diverse topological features. For most networks, we obtain a speedup of over 50 for 64 cores and an almost linear speedup up to 32 cores, showcasing the flexibility and scalability of our algorithm. This paves the way for the usage of such counting algorithms on larger subgraph and network sizes without the obligatory access to a cluster. <s> BIB004
Splitting. Algorithms that employ this strategy BIB003 BIB004 BIB001 BIB002 perform an initial static work division. They do not need a sophisticated criterion to choose to whom work-units are assigned, because work will be dynamically redistributed during runtime: whenever workers become idle, some work is relocated from busy workers to them. Furthermore, instead of simply giving half of its top-level work-units away and keeping the other half, a busy worker fully splits its work tree. The main idea is to build work-units of both fine- and coarse-grained sizes, which is particularly helpful when a worker becomes stuck managing a very complex initial work-unit; that work-unit is split in half, and it can be split again iteratively among other workers if needed. These work-units can then either be stored in a global work queue, which a master worker is responsible for managing BIB001 BIB002 , or shared between the worker threads themselves BIB003 BIB004 (more details in Sections 5.6.1 and 5.6.2, respectively).
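The full work-tree split could look like the following sketch. It is a simplified illustration under the assumption that a worker's state is a list of per-depth lists of unexplored branches (the state layout and names are hypothetical, not taken from any cited implementation).

```python
def split_work(state):
    """Split a recursive enumeration state into two halves.

    `state` is a list of levels; each level holds the not-yet-explored
    branches at that depth of the search tree.  Instead of handing over
    only half of the top-level units, the worker gives away half of the
    branches at *every* depth, so the receiver gets a mix of coarse
    (shallow) and fine (deep) work-units -- useful when the donor is
    stuck inside one very expensive top-level unit.
    """
    kept, given = [], []
    for level in state:
        half = len(level) // 2
        kept.append(level[half:])   # donor keeps one half of each level
        given.append(level[:half])  # receiver takes the other half
    return kept, given

# Hypothetical state: 4 shallow branches, 2 mid-depth, 5 deep ones.
state = [['a', 'b', 'c', 'd'], ['x', 'y'], [1, 2, 3, 4, 5]]
kept, given = split_work(state)
print(given)  # [['a', 'b'], ['x'], [1, 2]]
```

Because the split is applied at every depth, the receiver can itself be asked for work later and split again, which is the iterative splitting the text describes.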
A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Work Sharing <s> Dynamic load balancing is crucial for the performance of many parallel algorithms. Random polling, a simple randomized load balancing algorithm, has proved to be very efficient in practice for applications like parallel depth first search. This paper presents a detailed analysis of the algorithm taking into account many aspects of the underlying machine and the application to be load balanced. It derives tight scalability bounds which are for the first time able to explain the superior performance of random polling analytically. In some cases, the algorithm even turns out to be optimal. Some of the proof-techniques employed might also be useful for the analysis of other parallel algorithms. <s> BIB001 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Work Sharing <s> Counting the occurrences of small subgraphs in large networks is a fundamental graph mining metric with several possible applications. Computing frequencies of those subgraphs is also known as the subgraph census problem, which is a computationally hard task. In this paper we provide a parallel multicore algorithm for this purpose. At its core we use FaSE, an efficient network-centric sequential subgraph census algorithm, which is able to substantially decrease the number of isomorphism tests needed when compared to past approaches. We use one thread per core and employ a dynamic load balancing scheme capable of dealing with the highly unbalanced search tree induced by FaSE and effectively redistributing work during execution. We assessed the scalability of our algorithm on a varied set of representative networks and achieved near linear speedup up to 32 cores while obtaining a high efficiency for the total 64 cores of our machine.
<s> BIB002 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Work Sharing <s> Computing the frequency of small subgraphs on a large network is a computationally hard task. This is, however, an important graph mining primitive, with several applications, and here we present a novel multicore parallel algorithm for this task. At the core of our methodology lies a state-of-the-art data structure, the g-trie, which represents a collection of subgraphs and allows for a very efficient sequential search. Our implementation was done using Pthreads and can run on any multicore personal computer. We employ a diagonal work sharing strategy to dynamically and effectively divide work among threads during the execution. We assess the performance of our Pthreads implementation on a set of representative networks from various domains and with diverse topological features. For most networks, we obtain a speedup of over 50 for 64 cores and an almost linear speedup up to 32 cores, showcasing the flexibility and scalability of our algorithm. This paves the way for the usage of such counting algorithms on larger subgraph and network sizes without the obligatory access to a cluster. <s> BIB003 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Work Sharing <s> Enumerating all subgraphs of an input graph is an important task for analyzing complex networks. Valuable information can be extracted about the characteristics of the input graph using all-subgraph enumeration. Not withstanding, the number of subgraphs grows exponentially with growth of the input graph or by increasing the size of the subgraphs to be enumerated. Hence, all-subgraph enumeration is very time consuming when the size of the subgraphs or the input graph is big. We propose a parallel solution named Subenum which in contrast to available solutions can perform much faster. 
Subenum enumerates subgraphs using edges instead of vertices, and this approach leads to a parallel and load-balanced enumeration algorithm that can have efficient execution on current multicore and multiprocessor machines. Also, Subenum uses a fast heuristic which can effectively accelerate nonisomorphism subgraph enumeration. Subenum can efficiently use external memory, and unlike other subgraph enumeration methods, it is not associated with the main memory limits of the used machine. Hence, Subenum can handle large input graphs and subgraph sizes that other solutions cannot handle. Several experiments are done using real-world input graphs. Compared to the available solutions, Subenum can enumerate subgraphs several orders of magnitude faster and the experimental results show that the performance of Subenum scales almost linearly by using additional processor cores. <s> BIB004 </s> A Survey on Subgraph Counting: Concepts, Algorithms and Applications to Network Motifs and Graphlets <s> Work Sharing <s> Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multi-core CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. 
In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics. <s> BIB005
Since work is unbalanced for enumeration algorithms, work sharing can be used to balance work during runtime. 5.6.1 Master-Worker (M-W). This type of work sharing is mostly adopted in distributed memory (DM) environments, since workers do not share memory positions that they could easily access and use to communicate. A master worker initially splits the work-units among the workers (slaves) and then manages load balancing. Load balancing can be achieved by managing a global queue where slaves put some of their work, to be later redistributed by the master . This strategy implies that the master is not being used for the enumeration itself and that communication over the network is needed. 5.6.2 Worker-Worker (W-W). Shared memory (SM) environments allow direct communication between workers, making a master node redundant. In this strategy, an idle worker asks a random worker for work BIB002 BIB003 . One could try to estimate which worker should be polled for work (which is computationally costly), but random polling has been established as an efficient heuristic for dynamic load balancing BIB001 . After the sharing process, computation resumes, with each worker involved in the exchange computing its part of the work. Computation ends when all workers are polling for work. This strategy achieves a balanced work division during runtime, and the penalty caused by worker communication is negligible BIB002 BIB003 . Most implementations of W-W sharing are built on top of relatively homogeneous systems, such as multicore CPUs BIB004 or clusters of similar processors . In these systems, since all workers are equivalent, it is irrelevant which one gets a specific easy (or hard) work-unit, and thus only load balancing needs to be controlled. Strategies that combine CPUs with GPUs, for instance, can split tasks in a way that takes advantage of both architectures: GPUs are very fast for regular tasks, while CPUs can deal with irregular ones.
For instance, a shared deque can be kept where workers, either GPUs or CPUs, put work on or take work from BIB005 ; the deque is ordered by complexity: complex tasks are placed at the front, and simple tasks at the end. The main idea is that CPUs handle just a few complex work-units from the front of the deque, while GPUs take large bundles of work-units from the back.
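A minimal single-process sketch of such a complexity-ordered deque is shown below. The cost estimates, unit names, and batch size are hypothetical, and the real systems would guard the deque with synchronization; this only illustrates the front/back policy.

```python
from collections import deque

def build_work_deque(units):
    """Order (unit, estimated_cost) pairs by complexity: hardest first."""
    return deque(sorted(units, key=lambda u: u[1], reverse=True))

def cpu_take(dq):
    """A CPU worker takes a single complex unit from the front."""
    return dq.popleft() if dq else None

def gpu_take(dq, batch=3):
    """A GPU takes a large bundle of simple, regular units from the back."""
    bundle = []
    while dq and len(bundle) < batch:
        bundle.append(dq.pop())
    return bundle

# Hypothetical (unit, estimated cost) pairs.
dq = build_work_deque([('u1', 90), ('u2', 5), ('u3', 40), ('u4', 2), ('u5', 7)])
print(cpu_take(dq))   # ('u1', 90) -- the most irregular unit goes to a CPU
print(gpu_take(dq))   # [('u4', 2), ('u2', 5), ('u5', 7)] -- cheap, regular ones
```

Both ends drain toward the middle, so whichever architecture finishes first simply keeps pulling from its own end of the deque.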
A survey of commercial frameworks for the Internet of Things <s> I. INTRODUCTION <s> This paper addresses the Internet of Things. Main enabling factor of this promising paradigm is the integration of several technologies and communications solutions. Identification and tracking technologies, wired and wireless sensor and actuator networks, enhanced communication protocols (shared with the Next Generation Internet), and distributed intelligence for smart objects are just the most relevant. As one can easily imagine, any serious contribution to the advance of the Internet of Things must necessarily be the result of synergetic activities conducted in different fields of knowledge, such as telecommunications, informatics, electronics and social science. In such a complex scenario, this survey is directed to those who want to approach this complex discipline and contribute to its development. Different visions of this Internet of Things paradigm are reported and enabling technologies reviewed. What emerges is that still major issues shall be faced by the research community. The most relevant among them are addressed in details. <s> BIB001 </s> A survey of commercial frameworks for the Internet of Things <s> I. INTRODUCTION <s> Internet-of-Things (IoT) is the convergence of Internet with RFID, Sensor and smart objects. IoT can be defined as “things belonging to the Internet” to supply and access all of real-world information. Billions of devices are expected to be associated into the system and that shall require huge distribution of networks as well as the process of transforming raw data into meaningful inferences. IoT is the biggest promise of the technology today, but still lacking a novel mechanism, which can be perceived through the lenses of Internet, things and semantic vision. This paper presents a novel architecture model for IoT with the help of Semantic Fusion Model (SFM). 
This architecture introduces the use of Smart Semantic framework to encapsulate the processed information from sensor networks. The smart embedded system is having semantic logic and semantic value based Information to make the system an intelligent system. This paper presents a discussion on Internet oriented applications, services, visual aspect and challenges for Internet of things using RFID, 6lowpan and sensor networks. <s> BIB002
For more than a decade the Internet of Things (IoT) has boosted the development of standards-based messaging protocols. Recently, encouraged by the likes of Ericsson and Cisco with estimates of 50 billion Internet-connected devices by 2020 , attention has shifted from interoperability and message layer protocols towards application frameworks supporting interoperability amongst IoT product suppliers. The IoT is the interconnection of ubiquitous computing devices for the realization of value to end users BIB001 . This definition encompasses "data collection" for the betterment of understanding and "automation" of tasks for the optimization of time. The IoT field has evolved within application silos with domain-specific technologies, such as health care, social networks, manufacturing and home automation. To achieve a truly "interconnected network of things", the challenge is enabling the combination of heterogeneous technologies, protocols and application requirements to produce an automated and knowledge-based environment for the end user. In BIB002 , Singh et al. elaborate on three main visions for the IoT: the Internet Vision, the Things Vision and the Semantic Vision. Depending on which vision is chosen, the approach taken by a framework will differ and provide a better result for certain applications. As surveyed by Perera et al. , there are many existing IoT products and applications available. These, however, are based on proprietary frameworks which are not available for the development of customized applications. The frameworks presented in this survey are all targeted as a basis for the development of IoT applications. This paper presents a survey of highly regarded commercial frameworks and platforms which are being used for Internet of Things applications. Many of the frameworks rely on high-level software layers to assist in abstracting between protocols.
The high-level software layer provides flexibility when interconnecting different technologies and is well suited for working in cloud environments. In some cases the frameworks look into standardizing interfaces, defining a software service bus, or simply opting to choose a single network protocol and set of application protocols. The remainder of this paper is organized as follows: Section II introduces the concept of frameworks and defines the three categories of frameworks used in this survey. Sections III and IV then introduce the frameworks and platforms studied, grouped by application area. Section V presents a comparative analysis of the frameworks and platforms. The survey finishes with a few concluding remarks in Section VI.
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Motivation <s> Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85%) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it. 
<s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Motivation <s> Social media forms a central domain for the production and dissemination of real-time information. Even though such flows of information have traditionally been thought of as diffusion processes over social networks, the underlying phenomena are the result of a complex web of interactions among numerous participants. Here we develop the Linear Influence Model where rather than requiring the knowledge of the social network and then modeling the diffusion by predicting which node will influence which other nodes in the network, we focus on modeling the global influence of a node on the rate of diffusion through the (implicit) network. We model the number of newly infected nodes as a function of which other nodes got infected in the past. For each node we estimate an influence function that quantifies how many subsequent infections can be attributed to the influence of that node over time. A nonparametric formulation of the model leads to a simple least squares problem that can be solved on large datasets. We validate our model on a set of 500 million tweets and a set of 170 million news articles and blog posts. We show that the Linear Influence Model accurately models influences of nodes and reliably predicts the temporal dynamics of information diffusion. We find that patterns of influence of individual participants differ significantly depending on the type of the node and the topic of the information. <s> BIB002
Multiple research studies have shown that information diffuses fast over online social networks. In a pioneering study, BIB001 showed that the characteristics of information diffusion on social microblogging platforms, like Twitter, are similar to those of news media. They stimulated the notion that Twitter-like microblogging networks are hybrid in nature, combining the characteristics of social and information networks, unlike traditional social networks. This term corresponds to the information content in social network messages, as found in most of the literature, such as [Galuba et al. 2010] , BIB002 and many others. Literature also treats it as a group of blog posts hyperlinking to other blog posts .
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information diffusion <s> Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar. Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85%) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it. 
<s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information diffusion <s> Online social networking technologies enable individuals to simultaneously share information with any number of peers. Quantifying the causal effect of these mediums on the dissemination of information requires not only identification of who influences whom, but also of whether individuals would still propagate information in the absence of social signals about that information. We examine the role of social networks in online information diffusion with a large-scale field experiment that randomizes exposure to signals about friends' information sharing among 253 million subjects in situ. Those who are exposed are significantly more likely to spread information, and do so sooner than those who are not exposed. We further examine the relative role of strong and weak ties in information propagation. We show that, although stronger ties are individually more influential, it is the more abundant weak ties who are responsible for the propagation of novel information. This suggests that weak ties may play a more dominant role in the dissemination of information online than currently believed. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information diffusion <s> Social networks play a fundamental role in the diffusion of information. However, there are two different ways of how information reaches a person in a network. Information reaches us through connections in our social networks, as well as through the influence external out-of-network sources, like the mainstream media. While most present models of information adoption in networks assume information only passes from a node to node via the edges of the underlying network, the recent availability of massive online social media data allows us to study this process in more detail. 
We present a model in which information can reach a node via the links of the social network or through the influence of external sources. We then develop an efficient model parameter fitting technique and apply the model to the emergence of URL mentions in the Twitter network. Using a complete one month trace of Twitter we study how information reaches the nodes of the network. We quantify the external influences over time and describe how these influences affect the information adoption. We discover that the information tends to "jump" across the network, which can only be explained as an effect of an unobservable external influence on the network. We find that only about 71% of the information volume in Twitter can be attributed to network diffusion, and the remaining 29% is due to external events and factors outside the network. <s> BIB003
This term captures the movement of information cascades from one participant (or portion) of the social network to another. Several models in the literature attempt to capture the causes and dynamics of diffusing content (cascades), such as BIB002 , BIB001 , BIB003 and many others.
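Diffusion dynamics of this kind are often formalized with probabilistic cascade models. As an illustration only (not the specific model of any one cited work), a minimal sketch of the classic independent cascade process over a directed graph, with toy data assumed:

```python
import random

def independent_cascade(graph, seeds, p=0.3, rng=None):
    """Simulate one run of the independent cascade model.

    graph : dict mapping each node to a list of its out-neighbours
    seeds : initially activated nodes (e.g. original posters of a tweet)
    p     : probability that an active node activates a neighbour
    """
    rng = rng or random.Random(42)
    active = set(seeds)      # everyone reached so far
    frontier = list(seeds)   # nodes activated in the previous step
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbour in graph.get(node, []):
                # Each newly active node gets one chance per neighbour.
                if neighbour not in active and rng.random() < p:
                    active.add(neighbour)
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return active

# Hypothetical follower network: an edge u -> v means v sees u's posts.
toy = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
reached = independent_cascade(toy, seeds=["a"], p=1.0)
print(sorted(reached))  # with p=1.0 every reachable node is activated
```

Varying `p` per edge (rather than globally) recovers the influence-probability setting that learning approaches such as BIB004 in the next subsection try to estimate from action logs.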
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social influence <s> Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people's social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localize... <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social influence <s> In many online social systems, social ties between users play an important role in dictating their behavior. One of the ways this can happen is through social influence, the phenomenon that the actions of a user can induce his/her friends to behave in a similar way. In systems where social influence exists, ideas, modes of behavior, or new technologies can diffuse through the network like an epidemic. Therefore, identifying and understanding social influence is of tremendous interest from both analysis and design points of view. 
This is a difficult task in general, since there are factors such as homophily or unobserved confounding variables that can induce statistical correlation between the actions of friends in a social network. Distinguishing influence from these is essentially the problem of distinguishing correlation from causality, a notoriously hard statistical problem. In this paper we study this problem systematically. We define fairly general models that replicate the aforementioned sources of social correlation. We then propose two simple tests that can identify influence as a source of social correlation when the time series of user actions is available. We give a theoretical justification of one of the tests by proving that with high probability it succeeds in ruling out influence in a rather general model of social correlation. We also simulate our tests on a number of examples designed by randomly generating actions of nodes on a real social network (from Flickr) according to one of several models. Simulation results confirm that our test performs well on these data. Finally, we apply them to real tagging data on Flickr, exhibiting that while there is significant social correlation in tagging behavior on this system, this correlation cannot be attributed to social influence. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social influence <s> Social media forms a central domain for the production and dissemination of real-time information. Even though such flows of information have traditionally been thought of as diffusion processes over social networks, the underlying phenomena are the result of a complex web of interactions among numerous participants. 
Here we develop the Linear Influence Model where rather than requiring the knowledge of the social network and then modeling the diffusion by predicting which node will influence which other nodes in the network, we focus on modeling the global influence of a node on the rate of diffusion through the (implicit) network. We model the number of newly infected nodes as a function of which other nodes got infected in the past. For each node we estimate an influence function that quantifies how many subsequent infections can be attributed to the influence of that node over time. A nonparametric formulation of the model leads to a simple least squares problem that can be solved on large datasets. We validate our model on a set of 500 million tweets and a set of 170 million news articles and blog posts. We show that the Linear Influence Model accurately models influences of nodes and reliably predicts the temporal dynamics of information diffusion. We find that patterns of influence of individual participants differ significantly depending on the type of the node and the topic of the information. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social influence <s> Recently, there has been tremendous interest in the phenomenon of influence propagation in social networks. The studies in this area assume they have as input to their problems a social graph with edges labeled with probabilities of influence between users. However, the question of where these probabilities come from or how they can be computed from real social network data has been largely ignored until now. Thus it is interesting to ask whether from a social graph and a log of actions by its users, one can build models of influence. This is the main problem attacked in this paper. 
In addition to proposing models and algorithms for learning the model parameters and for testing the learned models to make predictions, we also develop techniques for predicting the time by which a user may be expected to perform an action. We validate our ideas and techniques using the Flickr data set consisting of a social graph with 1.3M nodes, 40M edges, and an action log consisting of 35M tuples referring to 300K distinct actions. Beyond showing that there is genuine influence happening in a real social network, we show that our techniques have excellent prediction performance. <s> BIB004
This term is often used to capture the notion of one participant of a social network taking an action similar to an earlier action of another participant, by explicitly or implicitly imitating it BIB002 . An example of such imitation is retweeting on Twitter. Many works in the literature model information diffusion taking social influence into consideration, such as [Galuba et al. 2010] , BIB003 , BIB004 and others. Homophily Familiarity is perceived when two or more individuals know each other (or, in the context of online social networks, befriend or connect with each other). Similarity is perceived when two or more individuals like one or more shared objects, items, topics etc. Homophily is the phenomenon of similar people also becoming socially familiar BIB001 .
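A simple first check for homophily in an observed network is to compare the share of ties between similar individuals against what random mixing would produce. The sketch below is illustrative only; the attribute labels and toy ties are assumptions, and it does not distinguish homophily from influence (the hard problem discussed in BIB002 ):

```python
from collections import Counter

def homophily_index(edges, attribute):
    """Compare the observed share of same-attribute ties with the share
    expected if ties formed at random, ignoring attributes.

    edges     : list of (u, v) pairs
    attribute : dict mapping each node to a categorical trait
    """
    observed = sum(attribute[u] == attribute[v] for u, v in edges) / len(edges)
    # Expected share under random mixing: sum of squared group proportions.
    counts = Counter(attribute.values())
    total = sum(counts.values())
    expected = sum((c / total) ** 2 for c in counts.values())
    return observed, expected

# Hypothetical toy network: users labelled by a shared interest.
interest = {1: "music", 2: "music", 3: "sports", 4: "music"}
ties = [(1, 2), (1, 4), (2, 4), (3, 1)]
obs, exp = homophily_index(ties, interest)
print(f"observed={obs:.2f}, random baseline={exp:.2f}")
```

An observed share well above the baseline is consistent with homophily in the sense of BIB001 , though ruling out confounders requires the more careful statistical tests surveyed above.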
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> For at least twenty‐five years, the concept of the clique has had a prominent place in sociometric and other kinds of sociological research. Recently, with the advent of large, fast computers and with the growth of interest in graph‐theoretic social network studies, research on the definition and investigation of the graph theoretic properties of clique‐like structures has grown. In the present paper, several of these formulations are examined, and their mathematical properties analyzed. A family of new clique‐like structures is proposed which captures an aspect of cliques which is seldom treated in the existing literature. The new structures, when used to complement existing concepts, provide a new means of tapping several important properties of social networks. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> Abstract Social network researchers have long sought measures of network cohesion, Density has often been used for this purpose, despite its generally admitted deficiencies. An approach to network cohesion is proposed that is based on minimum degree and which produces a sequence of subgraphs of gradually increasing cohesion. The approach also associates with any network measures of local density which promise to be useful both in characterizing network structures and in comparing networks. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> Here we study a variant of maximal clique enumeration problem by incorporating a minimum size criterion. We describe preprocessing techniques to reduce the graph size. 
This is of practical interest since enumerating maximal cliques is a computationally hard problem and the execution time increases rapidly with the input size. We discuss basics of an algorithm for enumerating large maximal cliques which exploits the constraint on minimum size of the desired maximal cliques. Social networks are prime examples of large sparse graphs where enumerating large maximal cliques is of interest. We present experimental results on the social network formed by the call detail records of one of the world's largest telecom service providers. Our results show that the preprocessing methods achieve significant reduction in the graph size. We also characterize the execution behaviour of our large maximal clique enumeration algorithm. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> Twitter is a user-generated content system that allows its users to share short text messages, called tweets, for a variety of purposes, including daily conversations, URLs sharing and information news. Considering its world-wide distributed network of users of any age and social condition, it represents a low level news flashes portal that, in its impressive short response time, has the principal advantage. In this paper we recognize this primary role of Twitter and we propose a novel topic detection technique that permits to retrieve in real-time the most emergent topics expressed by the community. First, we extract the contents (set of terms) of the tweets and model the term life cycle according to a novel aging theory intended to mine the emerging ones. A term can be defined as emerging if it frequently occurs in the specified time interval and it was relatively rare in the past. 
Moreover, considering that the importance of a content also depends on its source, we analyze the social relationships in the network with the well-known Page Rank algorithm in order to determine the authority of the users. Finally, we leverage a navigable topic graph which connects the emerging terms with other semantically related keywords, allowing the detection of the emerging topics, under user-specified time constraints. We provide different case studies which show the validity of the proposed approach. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> We present TwitterMonitor, a system that performs trend detection over the Twitter stream. The system identifies emerging topics (i.e. 'trends') on Twitter in real time and provides meaningful analytics that synthesize an accurate description of each topic. Users interact with the system by ordering the identified trends using different criteria and submitting their own description for each trend. We discuss the motivation for trend detection over social media streams and the challenges that lie therein. We then describe our approach to trend detection, as well as the architecture of TwitterMonitor. Finally, we lay out our demonstration scenario. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> Large volumes of spatio-temporal-thematic data being created using sites like Twitter and Jaiku, can potentially be combined to detect events, and understand various 'situations' as they are evolving at different spatio-temporal granularity across the world. Taking inspiration from traditional image pixels which represent aggregation of photon energies at a location, we consider aggregation of user interest levels at different geo-locations as social pixels. 
Combining such pixels spatio-temporally allows for creation of social images and video. Here, we describe how the use of relevant (media processing inspired) situation detection operators upon such 'images', and domain based rules can be used to decide relevant control actions. The ideas are showcased using a Swine flu monitoring application which uses Twitter data. <s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> Hashtags are used in Twitter to classify messages, propagate ideas and also to promote specific topics and people. In this paper, we present a linguistic-inspired study of how these tags are created, used and disseminated by the members of information networks. We study the propagation of hashtags in Twitter grounded on models for the analysis of the spread of linguistic innovations in speech communities, that is, in groups of people whose members linguistically influence each other. Differently from traditional linguistic studies, though, we consider the evolution of terms in a live and rapidly evolving stream of content, which can be analyzed in its entirety. In our experimental results, using a large collection crawled from Twitter, we were able to identify some interesting aspects -- similar to those found in studies of (offline) speech -- that led us to believe that hashtags may effectively serve as models for characterizing the propagation of linguistic forms, including: (1) the existence of a "preferential attachment process", that makes the few most common terms ever more popular, and (2) the relationship between the length of a tag and its frequency of use. The understanding of formation patterns of successful hashtags in Twitter can be useful to increase the effectiveness of real-time streaming search algorithms. 
<s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> We present a novel topic modelling-based methodology to track emerging events in microblogs such as Twitter. Our topic model has an in-built update mechanism based on time slices and implements a dynamic vocabulary. We first show that the method is robust in detecting events using a range of datasets with injected novel events, and then demonstrate its application in identifying trending topics in Twitter. <s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Social communities <s> We present the first comprehensive characterization of the diffusion of ideas on Twitter, studying more than 5.96 million topics that include both popular and less popular topics. On a data set containing approximately 10 million users and a comprehensive scraping of 196 million tweets, we perform a rigorous temporal and spatial analysis, investigating the time-evolving properties of the subgraphs formed by the users discussing each topic. We focus on two different notions of the spatial: the network topology formed by follower-following links on Twitter, and the geospatial location of the users. We investigate the effect of initiators on the popularity of topics and find that users with a high number of followers have a strong impact on topic popularity. We deduce that topics become popular when disjoint clusters of users discussing them begin to merge and form one giant component that grows to cover a significant fraction of the network. Our geospatial analysis shows that highly popular topics are those that cross regional boundaries aggressively. <s> BIB009
This represents a group of individuals with a large degree of familiarity. The familiarity either follows a certain structure ensuring a notion of sufficiency of connections, such as maximal cliques BIB003 , k-cores BIB002 , k-plexes BIB001 etc., or satisfies properties such as high modularity, where the connection density within the given group is significantly higher than with the other individuals belonging to the same social network . Topic In general, a topic captures a coherent set of concepts that are semantically/conceptually related to each other. In the context of social network content analysis, a topic notionally corresponds to a set of correlated user-generated concepts. In the literature, topics are often identified using techniques such as (a) hashtags of microblogs like Twitter (ex: BIB007 ), (b) bursty keyword identification (ex: BIB004 and BIB005 ), and (c) probability distributions of latent concepts over keywords in user-generated content (ex: BIB008 ). (Geo-social) Spread of topics This term usually portrays the maximum (or characteristic) geographical span that a topic has reached, or is expected to reach. Literature that addresses the geo-social spread of topics includes BIB009 , , BIB006 and many others.
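Among the community notions above, the k-core BIB002 is particularly simple to compute: repeatedly peel off nodes with fewer than k neighbours until every remaining node has at least k neighbours inside the group. A minimal pure-Python sketch over an undirected adjacency list, with toy data assumed:

```python
def k_core(adj, k):
    """Return the maximal subgraph in which every node has degree >= k.

    adj : dict mapping each node to the set of its neighbours (undirected)
    """
    nodes = set(adj)
    changed = True
    while changed:
        changed = False
        for node in list(nodes):
            # Count only neighbours that are still in the subgraph.
            if len(adj[node] & nodes) < k:
                nodes.discard(node)
                changed = True
    return nodes

# Toy network: a triangle {a, b, c} with a pendant node d attached to a.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
print(sorted(k_core(adj, 2)))  # the triangle survives; 'd' is peeled off
```

Cliques and k-plexes impose stricter or relaxed variants of the same degree condition, which is why the peeling idea above is often used as a preprocessing step when enumerating large maximal cliques, as in BIB003 .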
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Abstract The Internet has become a rich and large repository of information about us as individuals. Anything from the links and text on a user’s homepage to the mailing lists the user subscribes to are reflections of social interactions a user has in the real world. In this paper we devise techniques and tools to mine this information in order to extract social networks and the exogenous factors underlying the networks’ structure. In an analysis of two data sets, from Stanford University and the Massachusetts Institute of Technology (MIT), we show that some factors are better indicators of social connections than others, and that these indicators vary between user populations. Our techniques provide potential applications in automatically inferring real world connections and discovering, labeling, and characterizing communities. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people's social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. 
Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localize... <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures the "proximity" of nodes in a network. Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> We present TwitterMonitor, a system that performs trend detection over the Twitter stream. The system identifies emerging topics (i.e. 'trends') on Twitter in real time and provides meaningful analytics that synthesize an accurate description of each topic. 
Users interact with the system by ordering the identified trends using different criteria and submitting their own description for each trend. We discuss the motivation for trend detection over social media streams and the challenges that lie therein. We then describe our approach to trend detection, as well as the architecture of TwitterMonitor. Finally, we lay out our demonstration scenario. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Twitter is a user-generated content system that allows its users to share short text messages, called tweets, for a variety of purposes, including daily conversations, URLs sharing and information news. Considering its world-wide distributed network of users of any age and social condition, it represents a low level news flashes portal that, in its impressive short response time, has the principal advantage. In this paper we recognize this primary role of Twitter and we propose a novel topic detection technique that permits to retrieve in real-time the most emergent topics expressed by the community. First, we extract the contents (set of terms) of the tweets and model the term life cycle according to a novel aging theory intended to mine the emerging ones. A term can be defined as emerging if it frequently occurs in the specified time interval and it was relatively rare in the past. Moreover, considering that the importance of a content also depends on its source, we analyze the social relationships in the network with the well-known Page Rank algorithm in order to determine the authority of the users. Finally, we leverage a navigable topic graph which connects the emerging terms with other semantically related keywords, allowing the detection of the emerging topics, under user-specified time constraints. We provide different case studies which show the validity of the proposed approach. 
<s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> With the recent rise in popularity and size of social media, there is a growing need for systems that can extract useful information from this amount of data. We address the problem of detecting new events from a stream of Twitter posts. To make event detection feasible on web-scale corpora, we present an algorithm based on locality-sensitive hashing which is able overcome the limitations of traditional approaches, while maintaining competitive results. In particular, a comparison with a state-of-the-art system on the first story detection task shows that we achieve over an order of magnitude speedup in processing time, while retaining comparable performance. Event detection experiments on a collection of 160 million Twitter posts show that celebrity deaths are the fastest spreading news on Twitter. <s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Language use is overlaid on a network of social connections, which exerts an influence on both the topics of discussion and the ways that these topics can be expressed (Halliday, 1978). In the past, efforts to understand this relationship were stymied by a lack of data, but social media offers exciting new opportunities. By combining large linguistic corpora with explicit representations of social network structures, social media provides a new window into the interaction between language and society. Our long term goal is to develop joint sociolinguistic models that explain the social basis of linguistic variation. <s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Recently, there has been tremendous interest in the phenomenon of influence propagation in social networks. 
The studies in this area assume they have as input to their problems a social graph with edges labeled with probabilities of influence between users. However, the question of where these probabilities come from or how they can be computed from real social network data has been largely ignored until now. Thus it is interesting to ask whether from a social graph and a log of actions by its users, one can build models of influence. This is the main problem attacked in this paper. In addition to proposing models and algorithms for learning the model parameters and for testing the learned models to make predictions, we also develop techniques for predicting the time by which a user may be expected to perform an action. We validate our ideas and techniques using the Flickr data set consisting of a social graph with 1.3M nodes, 40M edges, and an action log consisting of 35M tuples referring to 300K distinct actions. Beyond showing that there is genuine influence happening in a real social network, we show that our techniques have excellent prediction performance. <s> BIB009 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Social media forms a central domain for the production and dissemination of real-time information. Even though such flows of information have traditionally been thought of as diffusion processes over social networks, the underlying phenomena are the result of a complex web of interactions among numerous participants. Here we develop the Linear Influence Model where rather than requiring the knowledge of the social network and then modeling the diffusion by predicting which node will influence which other nodes in the network, we focus on modeling the global influence of a node on the rate of diffusion through the (implicit) network. We model the number of newly infected nodes as a function of which other nodes got infected in the past. 
For each node we estimate an influence function that quantifies how many subsequent infections can be attributed to the influence of that node over time. A nonparametric formulation of the model leads to a simple least squares problem that can be solved on large datasets. We validate our model on a set of 500 million tweets and a set of 170 million news articles and blog posts. We show that the Linear Influence Model accurately models influences of nodes and reliably predicts the temporal dynamics of information diffusion. We find that patterns of influence of individual participants differ significantly depending on the type of the node and the topic of the information. <s> BIB010 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Streaming user-generated content in the form of blogs, microblogs, forums, and multimedia sharing sites, provides a rich source of data from which invaluable information and insights maybe gleaned. Given the vast volume of such social media data being continually generated, one of the challenges is to automatically tease apart the emerging topics of discussion from the constant background chatter. Such emerging topics can be identified by the appearance of multiple posts on a unique subject matter, which is distinct from previous online discourse. We address the problem of identifying emerging topics through the use of dictionary learning. We propose a two stage approach respectively based on detection and clustering of novel user-generated content. We derive a scalable approach by using the alternating directions method to solve the resulting optimization problems. Empirical results show that our proposed approach is more effective than several baselines in detecting emerging topics in traditional news story and newsgroup data. We also demonstrate the practical application to social media analysis, based on a study on streaming data from Twitter. 
<s> BIB011 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Twitter, Facebook, and other related systems that we call social awareness streams are rapidly changing the information and communication dynamics of our society. These systems, where hundreds of millions of users share short messages in real time, expose the aggregate interests and attention of global and local communities. In particular, emerging temporal trends in these systems, especially those related to a single geographic area, are a significant and revealing source of information for, and about, a local community. This study makes two essential contributions for interpreting emerging temporal trends in these information systems. First, based on a large dataset of Twitter messages from one geographic area, we develop a taxonomy of the trends present in the data. Second, we identify important dimensions according to which trends can be categorized, as well as the key distinguishing features of trends that can be derived from their associated messages. We quantitatively examine the computed features for different categories of trends, and establish that significant differences can be detected across categories. Our study advances the understanding of trends on Twitter and other social awareness streams, which will enable powerful applications and activities, including user-driven real-time information services for local communities. © 2011 Wiley Periodicals, Inc. <s> BIB012 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Reducing the impact of seasonal influenza epidemics and other pandemics such as the H1N1 is of paramount importance for public health authorities. Studies have shown that effective interventions can be taken to contain the epidemics if early detection can be made. 
Traditional approach employed by the Centers for Disease Control and Prevention (CDC) includes collecting influenza-like illness (ILI) activity data from “sentinel” medical practices. Typically there is a 1–2 week delay between the time a patient is diagnosed and the moment that data point becomes available in aggregate ILI reports. In this paper we present the Social Network Enabled Flu Trends (SNEFT) framework, which monitors messages posted on Twitter with a mention of flu indicators to track and predict the emergence and spread of an influenza epidemic in a population. Based on the data collected during 2009 and 2010, we find that the volume of flu related tweets is highly correlated with the number of ILI cases reported by CDC. We further devise auto-regression models to predict the ILI activity level in a population. The models predict data collected and published by CDC, as the percentage of visits to “sentinel” physicians attributable to ILI in successively weeks. We test models with previous CDC data, with and without measures of Twitter data, showing that Twitter data can substantially improve the models prediction accuracy. Therefore, Twitter data provides real-time assessment of ILI activity. <s> BIB013 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> We present a novel topic modelling-based methodology to track emerging events in microblogs such as Twitter. Our topic model has an in-built update mechanism based on time slices and implements a dynamic vocabulary. We first show that the method is robust in detecting events using a range of datasets with injected novel events, and then demonstrate its application in identifying trending topics in Twitter. 
<s> BIB014 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> L-LDA is a new supervised topic model for assigning "topics" to a collection of documents (e.g., Twitter profiles). User studies have shown that L-LDA effectively performs a variety of tasks in Twitter that include not only assigning topics to profiles, but also re-ranking feeds, and suggesting new users to follow. Building upon these promising qualitative results, we here run an extensive quantitative evaluation of L-LDA. We test the extent to which, compared to the competitive baseline of Support Vector Machines (SVM), L-LDA is effective at two tasks: 1) assigning the correct topics to profiles; and 2) measuring the similarity of a profile pair. We find that L-LDA generally performs as well as SVM, and it clearly outperforms SVM when training data is limited, making it an ideal classification technique for infrequent topics and for (short) profiles of moderately active users. We have also built a web application that uses L-LDA to classify any given profile and graphically map predominant topics in specific geographic regions. <s> BIB015 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Online social networking technologies enable individuals to simultaneously share information with any number of peers. Quantifying the causal effect of these mediums on the dissemination of information requires not only identification of who influences whom, but also of whether individuals would still propagate information in the absence of social signals about that information. We examine the role of social networks in online information diffusion with a large-scale field experiment that randomizes exposure to signals about friends' information sharing among 253 million subjects in situ. 
Those who are exposed are significantly more likely to spread information, and do so sooner than those who are not exposed. We further examine the relative role of strong and weak ties in information propagation. We show that, although stronger ties are individually more influential, it is the more abundant weak ties who are responsible for the propagation of novel information. This suggests that weak ties may play a more dominant role in the dissemination of information online than currently believed. <s> BIB016 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> We present the first comprehensive characterization of the diffusion of ideas on Twitter, studying more than 5.96 million topics that include both popular and less popular topics. On a data set containing approximately 10 million users and a comprehensive scraping of 196 million tweets, we perform a rigorous temporal and spatial analysis, investigating the time-evolving properties of the subgraphs formed by the users discussing each topic. We focus on two different notions of the spatial: the network topology formed by follower-following links on Twitter, and the geospatial location of the users. We investigate the effect of initiators on the popularity of topics and find that users with a high number of followers have a strong impact on topic popularity. We deduce that topics become popular when disjoint clusters of users discussing them begin to merge and form one giant component that grows to cover a significant fraction of the network. Our geospatial analysis shows that highly popular topics are those that cross regional boundaries aggressively. 
<s> BIB017 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic lifecycle <s> Twitter has become as much of a news media as a social network, and much research has turned to analyzing its content for tracking real-world events, from politics to sports and natural disasters. This paper describes the techniques we employed for the SNOW Data Challenge 2014, described in [Pap14]. We show that aggressive filtering of tweets based on length and structure, combined with hierarchical clustering of tweets and ranking of the resulting clusters, achieves encouraging results. We present empirical results and discussion for two different Twitter streams focusing on the US presidential elections in 2012 and the recent events about Ukraine, Syria and the Bitcoin, in February 2014. <s> BIB018
This term notionally corresponds to the temporal span over which a topic stays alive: from being introduced into the social network, to reaching its peak of geographical spread and social depth, to declining until it no longer exists in the network. Several works analyze topic lifecycle, such as BIB014, BIB005, BIB018, BIB006 and many more. Topical information diffusion: a body of research models information diffusion seeded from the topics underlying the information cascade content, such as BIB017 and others. These works place the topical nature of information diffusion at the heart of their models. A body of research emerged that attempted to identify topics and spot trending topics being discussed on online social media. BIB005 designed TwitterMonitor, for detecting and analyzing trends, and studying trend lifecycle. Using a two-stage approach comprising detection and clustering of new user-generated content, founded on dictionary learning to detect emerging topics on Twitter, BIB011 applied their system to streaming data to empirically demonstrate the effectiveness of their approach. Others attempted to predict topics that would draw attention in the future. Further studies have been conducted for trend and topic lifecycle analysis on social networks, specifically Twitter, such as BIB018, BIB014, BIB012 and BIB007. Predicting the existence of social connections between given pairs of individual members of social networks, in the form of social links, has been an area of long-standing research. Link prediction algorithms that use graph properties have long existed. Some well-known link prediction methods are the Adamic-Adar method BIB001, Jaccard's coefficient, rooted PageRank BIB003, the Katz method and SimRank [Jeh and Widom 2002]. BIB008 investigated the effectiveness of content in social network link prediction, and experimented on Twitter.
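As a concrete sketch of the neighbourhood-based predictors listed above, Jaccard's coefficient and the Adamic-Adar score can be computed in a few lines. The friendship graph below is a made-up toy example, not data from any of the surveyed studies.

```python
import math
from collections import defaultdict

# Toy undirected friendship graph (hypothetical data).
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d"), ("d", "e")]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def jaccard(u, v):
    """Jaccard's coefficient: overlap of the two neighbourhoods."""
    common = adj[u] & adj[v]
    union = adj[u] | adj[v]
    return len(common) / len(union) if union else 0.0

def adamic_adar(u, v):
    """Adamic-Adar: shared neighbours, weighted inversely by their log-degree."""
    return sum(1.0 / math.log(len(adj[w])) for w in adj[u] & adj[v] if len(adj[w]) > 1)

# Score every non-adjacent pair; a higher score suggests a more likely future link.
nodes = sorted(adj)
candidates = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:] if v not in adj[u]]
ranked = sorted(candidates, key=lambda p: adamic_adar(*p), reverse=True)
print(ranked[0])  # -> ('a', 'd')
```

Both scores reward common neighbours; Adamic-Adar additionally down-weights neighbours that are hubs, so pairs sharing low-degree friends rank relatively higher.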
BIB015 proposed a "supervised topic classification and link prediction system on Twitter". Identifying structural communities that form implicitly based upon familiarity within social networks, rather than by explicit interest-based group memberships, has been another area of long-standing research. There are multiple definitions of communities; however, the modularity method is arguably the most well-known and well-accepted definition. Fast approximation algorithms for modularity optimization exist, one of the most well-known being BGLL, proposed by BIB004. While links and communities are rooted in the notion of familiarity, another popular topic of research in online social networks is homophily BIB002, the phenomenon of similar people also being socially familiar. Some studies have considered similarity and social familiarity together, to investigate how information diffusion is impacted by homophily. Understanding social influence, and analyzing its impact on diffusion characteristics such as spread and longevity in the context of topics and information, has received immense research focus. Several works have investigated online social networks and microblogs, and have created information diffusion models that account for the effect of the influence of participants. BIB009 created an influence model using the Flickr social network graph and user action logs. Identifying who influences whom, and exploring whether participants would propagate the same information in the absence of social signals, BIB016 measured the effect of social networking mediums on information dissemination, validating their findings on 253 million subjects. BIB010 modeled the "global influence" of social network participants, using the rate of information diffusion via the social network. Many other works have explored influence and its impact on social networks, along the dimensions of information diffusion, topics, interest and the lifecycle of topics.
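To make the modularity criterion concrete, here is a minimal pure-Python computation of Newman's Q = Σ_c (e_c/m − (d_c/2m)²) over a given partition. The two-triangle toy graph and both partitions are hypothetical.

```python
from collections import defaultdict

def modularity(edges, partition):
    """Newman modularity Q = sum_c (e_c/m - (d_c/2m)^2) of an undirected graph.

    edges: list of (u, v) pairs; partition: dict mapping node -> community id.
    """
    m = len(edges)
    degree = defaultdict(int)
    internal = defaultdict(int)   # e_c: edges with both endpoints in community c
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        if partition[u] == partition[v]:
            internal[partition[u]] += 1
    comm_degree = defaultdict(int)  # d_c: total degree of community c
    for node, c in partition.items():
        comm_degree[c] += degree[node]
    return sum(internal[c] / m - (comm_degree[c] / (2 * m)) ** 2 for c in comm_degree)

# Two triangles joined by one bridge edge (hypothetical toy graph).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
good = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
bad = {0: "A", 1: "A", 2: "B", 3: "B", 4: "A", 5: "B"}
print(round(modularity(edges, good), 3))   # -> 0.357
print(round(modularity(edges, bad), 3))    # the mixed split scores lower
```

Greedy methods such as BGLL repeatedly move nodes between communities to increase exactly this quantity; here the well-separated partition scores 0.357 while the mixed one goes negative.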
Addressing the geo-temporal aspects of information diffusion on social networks, researchers have attempted to model the evolution of information and topics over time and across geographical boundaries. BIB017 characterized the diffusion of ideas on social networks by conducting a spatio-temporal analysis, showing that popular topics tend to cross regional boundaries aggressively. Another line of work found the temporal evolution of topical discussions on Twitter to localize geographically, and to evolve more strongly at finer geo-spatial granularities; for instance, city-level discussions were found to evolve more than country-level ones. BIB013 used Twitter to collect data pertaining to influenza-like illness. Using Twitter data, their model could substantially improve the influenza epidemic predictions made from the Centers for Disease Control and Prevention (CDC) data. Overall, identifying and characterizing topics and information diffusion has received significant research attention: researchers have modeled information diffusion, correlated the phenomenon with network structures, and investigated the roles and impacts of topics, the lifecycle of topics, influence, familiarity, similarity, homophily and spatio-temporal factors. In the current article, we conduct a survey of literature that has created significant impact in this space, and explore the details of some of the models and methods that have been widely adopted by researchers. The aim is to provide an overview of the representative state-of-the-art models that perform topic analysis, capture information diffusion, and explore the properties of social connections in this context, for online social networks. We believe our article will be useful for researchers to identify the current literature, and help in identifying what can be improved over the state of the art. The rest of the paper is organized as follows.
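The flavour of the auto-regression models used in such epidemic-prediction work, regressing this week's influenza-like-illness (ILI) level on its own past values plus a Twitter-volume signal, can be sketched with ordinary least squares. All series below are synthetic; no CDC or Twitter data is involved, and the coefficients are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly series: ILI % of visits and flu-related tweet volume (hypothetical).
weeks = 60
tweets = rng.poisson(1000, weeks).astype(float)
ili = np.zeros(weeks)
for t in range(1, weeks):
    ili[t] = 0.6 * ili[t - 1] + 0.002 * tweets[t] + rng.normal(0, 0.05)

# Design matrix: intercept, one-week-lagged ILI, and current tweet volume
# (an AR(1) model augmented with an exogenous Twitter term).
X = np.column_stack([np.ones(weeks - 1), ili[:-1], tweets[1:]])
y = ili[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead forecast for next week, given this week's ILI and tweet count.
next_ili = coef[0] + coef[1] * ili[-1] + coef[2] * tweets[-1]
print(round(next_ili, 3))
```

The fit recovers coefficients close to the generating values (0.6 on lagged ILI, 0.002 on tweet volume), which is the sense in which an added Twitter regressor can sharpen a purely autoregressive forecast.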
In Section 2, we explore the literature on topic-based link prediction and community discovery on social networks. This is followed by a literature survey of information diffusion, and the role of user influence, in Section 3. Section 4 covers the literature addressing the lifecycle of topics: their inception, spread and evolution. The literature addressing the impact of social familiarity and topical (and interest) similarity is covered in Section 5. The literature on spatio-temporal analysis of social network discussion topics is surveyed in Section 6. A high-level discussion of problems of potential interest, and of problems where we believe existing solutions can be improved, is provided in Section 7.
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction and Community Discovery <s> Language use is overlaid on a network of social connections, which exerts an influence on both the topics of discussion and the ways that these topics can be expressed (Halliday, 1978). In the past, efforts to understand this relationship were stymied by a lack of data, but social media offers exciting new opportunities. By combining large linguistic corpora with explicit representations of social network structures, social media provides a new window into the interaction between language and society. Our long term goal is to develop joint sociolinguistic models that explain the social basis of linguistic variation. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction and Community Discovery <s> L-LDA is a new supervised topic model for assigning "topics" to a collection of documents (e.g., Twitter profiles). User studies have shown that L-LDA effectively performs a variety of tasks in Twitter that include not only assigning topics to profiles, but also re-ranking feeds, and suggesting new users to follow. Building upon these promising qualitative results, we here run an extensive quantitative evaluation of L-LDA. We test the extent to which, compared to the competitive baseline of Support Vector Machines (SVM), L-LDA is effective at two tasks: 1) assigning the correct topics to profiles; and 2) measuring the similarity of a profile pair. We find that L-LDA generally performs as well as SVM, and it clearly outperforms SVM when training data is limited, making it an ideal classification technique for infrequent topics and for (short) profiles of moderately active users. 
We have also built a web application that uses L-LDA to classify any given profile and graphically map predominant topics in specific geographic regions. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction and Community Discovery <s> Automatic detection of communities (or cohesive groups of actors in social network) in online social media platforms based on user interests and interaction is a problem that has recently attracted a lot of research attention. Mining user interactions on Twitter to discover such communities is a technically challenging information retrieval task. We present an algorithm - iTop - to discover interaction based topic centric communities by mining user interaction signals (such as @-messages and retweets) which indicate cohesion. iTop takes any topic as an input keyword and exploits local information to infer global topic-centric communities. We evaluate the discovered communities along three dimensions: graph based (node-edge quality), empirical-based (Twitter lists) and semantic based (frequent n-grams in tweets). We conduct experiments on a publicly available scrape of Twitter provided by InfoChimps via a web service. We perform a case study on two diverse topics - 'Computer Aided Design (CAD)' and 'Kashmir' to demonstrate the efficacy of iTop. Empirical results from both case studies show that iTop is successfully able to discover topic-centric, interaction based communities on Twitter. <s> BIB003
Link prediction is the problem of predicting the existence of social links amongst pairs of social network participants. In the traditional literature, the prediction of links has mostly been carried out by investigating social network graph properties. Since information spreads on online social networks over topics of discussion, predicting links based upon information content essentially gives an intuition of the pathway along which given content (information) would diffuse. This also holds for communities formed on social network graphs, over links inferred from user-generated topical text content. The key works surveyed in this section can be summarized as follows. BIB001 predicts links based upon user-generated content using LDA, and shows that content-based link prediction outperforms graph-structure-based link prediction. BIB002 creates user profiles from user-generated tweets, assigns topics to user profiles, and measures the similarity of user profile pairs using L-LDA and SVM; it shows that L-LDA outperforms SVM for Twitter user profile classification, and uses the profile-pair similarity thus obtained as a predictor of social links. BIB003 discovers topical communities from user-generated messages on Twitter, mining retweets, replies and mentions as indicative signals of cohesion to infer global topic-specific communities; the effectiveness of the method is shown by evaluating the communities across three dimensions, namely graph (friendship connections), empirical (actual user profiles) and semantic (frequent n-grams).
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link prediction problem, and develop approaches to link prediction based on measures the "proximity" of nodes in a network. Experiments on large co-authorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> We develop the relational topic model (RTM), a hierarchical model of both network structure and node attributes. 
We focus on document networks, where the attributes of each document are its words, that is, discrete observations taken from a fixed vocabulary. For each pair of documents, the RTM models their link as a binary random variable that is conditioned on their contents. The model can be used to summarize a network of documents, predict links between them, and predict words within them. We derive efficient inference and estimation algorithms based on variational methods that take advantage of sparsity and scale with the number of links. We evaluate the predictive performance of the RTM for large networks of scientific abstracts, web documents, and geographically tagged news. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> A significant portion of the world's text is tagged by readers on social bookmarking websites. Credit attribution is an inherent problem in these corpora because most pages have multiple tags, but the tags do not always apply with equal specificity across the whole document. Solving the credit attribution problem requires associating each word in a document with the most appropriate tags and vice versa. This paper introduces Labeled LDA, a topic model that constrains Latent Dirichlet Allocation by defining a one-to-one correspondence between LDA's latent topics and user tags. This allows Labeled LDA to directly learn word-tag correspondences. We demonstrate Labeled LDA's improved expressiveness over traditional LDA with visualizations of a corpus of tagged web pages from del.icio.us. Labeled LDA outperforms SVMs by more than 3 to 1 when extracting tag-specific document snippets. As a multi-label text classifier, our model is competitive with a discriminative baseline on a variety of datasets. 
<s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> We propose the first unsupervised approach to the problem of modeling dialogue acts in an open domain. Trained on a corpus of noisy Twitter conversations, our method discovers dialogue acts by clustering raw utterances. Because it accounts for the sequential behaviour of these acts, the learned model can provide insight into the shape of communication in a new medium. We address the challenge of evaluating the emergent model with a qualitative visualization and an intrinsic conversation ordering task. This work is inspired by a corpus of 1.3 million Twitter conversations, which will be made publicly available. This huge amount of data, available only because Twitter blurs the line between chatting and publishing, highlights the need to be able to adapt quickly to a new medium. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> Language use is overlaid on a network of social connections, which exerts an influence on both the topics of discussion and the ways that these topics can be expressed (Halliday, 1978). In the past, efforts to understand this relationship were stymied by a lack of data, but social media offers exciting new opportunities. By combining large linguistic corpora with explicit representations of social network structures, social media provides a new window into the interaction between language and society. Our long term goal is to develop joint sociolinguistic models that explain the social basis of linguistic variation. <s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> With hundreds of millions of participants, social media services have become commonplace. 
Unlike a traditional social network service, a microblogging network like Twitter is a hybrid network, combining aspects of both social networks and information networks. Understanding the structure of such hybrid networks and predicting new links are important for many tasks such as friend recommendation, community detection, and modeling network growth. We note that the link prediction problem in a hybrid network is different from previously studied networks. Unlike the information networks and traditional online social networks, the structures in a hybrid network are more complicated and informative. We compare most popular and recent methods and principles for link prediction and recommendation. Finally we propose a novel structure-based personalized link prediction model and compare its predictive performance against many fundamental and popular link prediction methods on real-world data from the Twitter microblogging network. Our experiments on both static and dynamic data sets show that our methods noticeably outperform the state-of-the-art. <s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> Link prediction and recommendation is a fundamental problem in social network analysis. The key challenge of link prediction comes from the sparsity of networks due to the strong disproportion of links that they have potential to form to links that do form. Most previous work tries to solve the problem in single network, few research focus on capturing the general principles of link formation across heterogeneous networks. In this work, we give a formal definition of link recommendation across heterogeneous networks. Then we propose a ranking factor graph model (RFG) for predicting links in social networks, which effectively improves the predictive performance. 
Motivated by the intuition that people make friends in different networks with similar principles, we find several social patterns that are general across heterogeneous networks. With the general social patterns, we develop a transfer-based RFG model that combines them with network structure information. This model provides us insight into fundamental principles that drive the link formation and network evolution. Finally, we verify the predictive performance of the presented transfer model on 12 pairs of transfer cases. Our experimental results demonstrate that the transfer of general social patterns indeed help the prediction of links. <s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Link Prediction <s> L-LDA is a new supervised topic model for assigning "topics" to a collection of documents (e.g., Twitter profiles). User studies have shown that L-LDA effectively performs a variety of tasks in Twitter that include not only assigning topics to profiles, but also re-ranking feeds, and suggesting new users to follow. Building upon these promising qualitative results, we here run an extensive quantitative evaluation of L-LDA. We test the extent to which, compared to the competitive baseline of Support Vector Machines (SVM), L-LDA is effective at two tasks: 1) assigning the correct topics to profiles; and 2) measuring the similarity of a profile pair. We find that L-LDA generally performs as well as SVM, and it clearly outperforms SVM when training data is limited, making it an ideal classification technique for infrequent topics and for (short) profiles of moderately active users. We have also built a web application that uses L-LDA to classify any given profile and graphically map predominant topics in specific geographic regions. <s> BIB009
Several works in the literature, such as BIB007 and BIB008, have addressed predicting social links between pairs of users by looking at graph attributes BIB001 BIB005. However, these studies explore graph structure and properties, and do not consider content semantics. One body of work uses user-generated content as the foundation of the link prediction process. In one such work, BIB006 study the effectiveness of content in predicting links on social networks, using Twitter data for experiments. Using Twitter's GardenHose API, they collect around 15% of all messages posted on Twitter in January 2010. They extract a representative subset by sampling the first 500 people who posted at least 16 messages within this period, and subsequently crawl 500 randomly selected followers of each of these people. They end up with a data set comprising 21,306 users, 837,879 messages, and 10,578,934 word tokens posted as part of these messages. Subsequently, they tokenize on whitespaces and apostrophes, factoring in the non-standard orthography inherent to Twitter messages: the # mark indicates a topic, and the @ mark indicates a retweet. Removing low-frequency words that appear fewer than 50 times from the vocabulary, they are left with 11,425 tokens. Out-of-vocabulary items are classified as words, URLs, or numbers. They use LDA BIB002 for predicting pairwise links on the content graph. To do so, they gather all of the messages from a given user into a single document, since individual Twitter messages are short. Thus, their model learns latent topics that characterize authors rather than messages. They subsequently compute author similarity as the dot product of topic proportions.
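A minimal sketch of this content-based scoring step, assuming per-author topic proportions have already been inferred (e.g., by LDA over each author's concatenated messages): the topic vectors, weight vector eta and intercept nu below are invented for illustration, and the link strength is taken as exp(η^T(z̄_i ∘ z̄_j) + ν), the form used in Chang and Blei's relational topic model.

```python
import numpy as np

# Expected topic proportions for two authors over 4 latent topics
# (hypothetical values; each vector sums to 1).
z_i = np.array([0.70, 0.10, 0.10, 0.10])
z_j = np.array([0.60, 0.20, 0.10, 0.10])

# Learned regression weights and intercept (made-up values for illustration).
eta = np.array([1.5, 0.5, 0.5, 0.5])
nu = -2.0

def link_strength(z_a, z_b):
    """exp(eta^T (z_a * z_b) + nu): predicted strength of connection."""
    return float(np.exp(eta @ (z_a * z_b) + nu))

score_similar = link_strength(z_i, z_j)
score_dissimilar = link_strength(z_i, np.array([0.05, 0.05, 0.20, 0.70]))
assert score_similar > score_dissimilar  # topical overlap raises predicted strength
```

The element-wise product z̄_i ∘ z̄_j is large only in topics both authors favour, so the score is essentially a learned, re-weighted version of the dot-product similarity described above.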
Following the method of Chang and Blei BIB003, they predict the strength of connection between authors i and j as exp(η^T (z̄_i ∘ z̄_j) + ν), where z̄_i and z̄_j denote the expected topic proportions for authors i and j, ∘ denotes the element-wise product, η denotes a vector of learned regression weights, and ν is an intercept term, necessary if the link prediction function returns a probability. They compare their results with those obtained by the methodology of Liben-Nowell and Kleinberg BIB001, which depends upon the graph structure but not upon user-generated content. The content-based model performs significantly better than the structure-based one, establishing a logical foundation for considering user-generated content an effective instrument to predict social links. In another work, BIB009 propose a "supervised topic classification and link prediction system on Twitter". They create user profiles based upon the posts made by the users. Their work uses the Labeled-LDA (L-LDA) technique of BIB004, a generative model for multiply labeled corpora that generates a labeled document collection. Unlike traditional LDA and its supervised embodiments, L-LDA assigns one topic to each label in a multiply-labeled document. It incorporates supervision to extend LDA BIB002, and incorporates a mixture model to extend Multinomial Naive Bayes. L-LDA models each document as a mix of elemental topics, with each word generated from one topic; the topic model is constrained to only use topics corresponding to a document's observed set of labels. They "set the number of topics in L-LDA as the number of unique labels K in the corpus", and run LDA such that the multinomial mixture distribution θ^(d) is defined only over topics corresponding to the labels Λ^(d), the binary list indicating the presence/absence of each topic l inside document d.
To enable this constraint, they first generate the document label vector Λ^(d): for each topic k, a Bernoulli coin toss with labeling prior probability Φ_k decides whether label k is present in document d. They subsequently define the document label projection matrix L^(d): if the i-th document label and the j-th topic are the same, then the (i, j)-th element of L^(d) has a value of 1, else zero. The "parameter vector of the Dirichlet prior α = (α_1, ..., α_K)^T" is projected to the lower-dimensional vector α^(d) = L^(d) × α, whose dimensions "correspond to the topics represented by the document labels". Finally, θ^(d) is drawn from the Dirichlet distribution with parameter α^(d). They experiment on Twitter data using the L-LDA technique: they assign topics to user profiles, and measure the similarity of user profile pairs. They find L-LDA to significantly outperform Support Vector Machines (SVM) for user profile classification where training data is limited, and to provide performance similar to SVM where sufficient training data is available. They thereby infer L-LDA to be a good technique for classifying infrequent topics and (short) profiles of moderately active users. They treat user profile pair similarities as a predictor of social links.
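The label-projection step just described can be written out directly in a few lines of NumPy; the label vector, the number of topics and the prior values below are hypothetical.

```python
import numpy as np

K = 5                      # total number of unique labels/topics in the corpus
alpha = np.full(K, 0.5)    # Dirichlet prior alpha = (alpha_1, ..., alpha_K)^T

# Binary label vector Lambda^(d) for one document: labels 1 and 3 are present
# (0-indexed; a hypothetical outcome of the Bernoulli coin tosses).
Lambda_d = np.array([0, 1, 0, 1, 0])
labels = np.flatnonzero(Lambda_d)          # document label list: [1, 3]

# Projection matrix L^(d): row i selects the topic matching the i-th label.
L_d = np.zeros((len(labels), K))
L_d[np.arange(len(labels)), labels] = 1.0

# Restricted Dirichlet parameter: alpha^(d) = L^(d) x alpha, one entry per label.
alpha_d = L_d @ alpha
print(alpha_d)  # -> [0.5 0.5]

# theta^(d) is then drawn from Dirichlet(alpha^(d)), so the document can only
# assign probability mass to its own observed labels.
theta_d = np.random.default_rng(0).dirichlet(alpha_d)
assert theta_d.shape == (2,)
```

The projection simply slices the corpus-wide prior down to the document's labels, which is what forces θ^(d) to be zero on every topic the document was not tagged with.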
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Community Discovery <s> The cross-entropy (CE) method is a new generic approach to combinatorial and multi-extremal optimization and rare event simulation. The purpose of this tutorial is to give a gentle introduction to the CE method. We present the CE methodology, the basic algorithm and its modifications, and discuss applications in combinatorial optimization and machine learning. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Community Discovery <s> Although the inference of global community structure in networks has recently become a topic of great interest in the physics community, all such algorithms require that the graph be completely known. Here, we define both a measure of local community structure and an algorithm that infers the hierarchy of communities that enclose a given vertex by exploring the graph one vertex at a time. This algorithm runs in time O(k2d) for general graphs when d is the mean degree and k is the number of vertices to be explored. For graphs where exploring a new vertex is time consuming, the running time is linear, O(k). We show that on computer-generated graphs the average behavior of this technique approximates that of algorithms that require global knowledge. As an application, we use this algorithm to extract meaningful local clustering information in the large recommender network of an online retailer. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Community Discovery <s> We propose a simple method to extract the community structure of large networks. Our method is a heuristic method that is based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. 
Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Community Discovery <s> Nodes in real-world networks, such as social, information or technological networks, organize into communities where edges appear with high concentration among the members of the community. Identifying communities in networks has proven to be a challenging task mainly due to a plethora of definitions of a community, intractability of algorithms, issues with evaluation and the lack of a reliable gold-standard ground-truth. We study a set of 230 large social, collaboration and information networks where nodes explicitly define group memberships. We use these groups to define the notion of ground-truth communities. We then propose a methodology which allows us to compare and quantitatively evaluate different definitions of network communities on a large scale. We choose 13 commonly used definitions of network communities and examine their quality, sensitivity and robustness. We show that the 13 definitions naturally group into four classes. We find that two of these definitions, Conductance and Triad-participation-ratio, consistently give the best performance in identifying ground-truth communities. 
<s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Community Discovery <s> Automatic detection of communities (or cohesive groups of actors in social network) in online social media platforms based on user interests and interaction is a problem that has recently attracted a lot of research attention. Mining user interactions on Twitter to discover such communities is a technically challenging information retrieval task. We present an algorithm - iTop - to discover interaction based topic centric communities by mining user interaction signals (such as @-messages and retweets) which indicate cohesion. iTop takes any topic as an input keyword and exploits local information to infer global topic-centric communities. We evaluate the discovered communities along three dimensions: graph based (node-edge quality), empirical-based (Twitter lists) and semantic based (frequent n-grams in tweets). We conduct experiments on a publicly available scrape of Twitter provided by InfoChimps via a web service. We perform a case study on two diverse topics - 'Computer Aided Design (CAD)' and 'Kashmir' to demonstrate the efficacy of iTop. Empirical results from both case studies show that iTop is successfully able to discover topic-centric, interaction based communities on Twitter. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic-Based Community Discovery <s> Homophily suggests that people tend to befriend others with shared traits, such as similar topical interests or overlapping social circles. We study how people communicate online in term of conversation topics from an egocentric viewpoint using a dataset from Facebook. We find that friends who favor similar topics form topic-based clusters; these clusters have dense connectivities, large growth rates, and little overlap. <s> BIB006
In the social network analysis literature, communities are identified by one of the following. (a) Individuals subscribe to existing interest groups, and thereby start explicitly belonging to a community based upon their similarity of interests. (b) Groups of individuals known to each other directly, or having a large number of mutual friends, are said to belong to the same implicit community. While several definitions of structural communities have emerged over time, modularity-based community finding ] is the most popular methodology. Modularity-based community finding from a given graph is inherently expensive. BIB003 propose BGLL as a fast approximation algorithm towards this. BIB004 investigate structural and functional communities, and the impacts of structure on community functions. Literature mostly explores community discovery from explicit links such as social friendships. However, some work also exists to find communities formed upon links inferred from user-generated topics and/or content. In one such work, BIB005 ] discover topical communities on Twitter tweets. They mine retweets, replies and mentions, collectively labeling these as @-messages. They create an edge between a vertex (user) pair v x and v y if I(RT xy , @ xy ) ≠ 0, where I(RT xy , @ xy ) is the @-message based interaction strength between v x and v y . They adapt the local modularity (LM) algorithm BIB002 ] for directed graphs, to discover communities of interest using local information. Their framework comprises four blocks: warm start, expand, filter and iterate. For the warm start, they take a topic of interest t i as input, and conduct a Twitter user bio search, where the bio comprises the publicly available profile information of the user such as name, location, URL and biography. The users found by the search to have related interest and inclination towards this topic are included as parts of communities of interest, denoted as C t i current .
In the expand step, they take this list of users, and add vertices U t i , where each β t i ∈ C t i current has an edge with at least one vertex in U t i . The weight of an edge is defined by the closeness of the user pair in terms of @-messages. For instance, a directed edge X → Y is drawn from X to Y iff X has interacted with Y . Further, a weight w is assigned based upon the interaction strength. The expand and filter steps are iterated until the local modularity of the growing graph G max is stable or its change is consistently negative, indicating that there is no further scope for improvement. Thus, they identify topic-specific global communities, taking a topic as an input keyword. They "evaluate the communities along three dimensions, namely graph (vertex-edge quality), empirical (actual Twitter profiles) and semantic (n-grams frequently appearing in tweets)". In another work, BIB006 explores the Facebook social network for topic-based cluster analysis, and shows that friends who favor similar topics form topic-based clusters. This study further shows that these clusters have dense connectivity, large growth rates, and little overlap. Cross-entropy BIB001 , which is based upon Kullback-Leibler (K-L) divergence, and normalized mutual information are relevant measurements frequently appearing in the literature on communities, user profile pair similarities and topical divergence computation.
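The two divergence measures mentioned above can be sketched directly; the topic distributions p and q below are hypothetical user-profile topic mixtures, chosen only for illustration.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits.

    Assumes q[i] > 0 wherever p[i] > 0; zero-probability terms of p
    contribute nothing (the 0 * log 0 convention)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def cross_entropy(p, q):
    """Cross-entropy H(p, q) = H(p) + D(p || q), also in bits."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

# Topic distributions of two (hypothetical) user profiles
p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]
print(kl_divergence(p, q))   # ~0.123 bits: small, so topically similar profiles
print(cross_entropy(p, q))   # ~1.280 bits
```

Note that D(p || q) is asymmetric and not a metric, which is one reason normalized mutual information is often preferred when comparing discovered communities against ground truth.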
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Models of collective behavior are developed for situations where actors have two alternatives and the costs and/or benefits of each depend on how many other actors choose which alternative. The key concept is that of "threshold": the number or proportion of others who must make one decision before a given actor does so; this is the point where net benefits begin to exceed net costs for that particular actor. Beginning with a frequency distribution of thresholds, the models allow calculation of the ultimate or "equilibrium" number making each decision. The stability of equilibrium results against various possible changes in threshold distributions is considered. Stress is placed on the importance of exact distributions for outcomes. Groups with similar average preferences may generate very different results; hence it is hazardous to infer individual dispositions from aggregate outcomes or to assume that behavior was directed by ultimately agreed-upon norms. Suggested applications are to riot ... <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Though word-of-mouth (w-o-m) communications is a pervasive and intriguing phenomenon, little is known on its underlying process of personal communications. Moreover as marketers are getting more interested in harnessing the power of w-o-m, for e-business and other net related activities, the effects of the different communications types on macro level marketing is becoming critical.
In particular we are interested in the breakdown of the personal communication between closer and stronger communications that are within an individual's own personal group (strong ties) and weaker and less personal communications that an individual makes with a wide set of other acquaintances and colleagues (weak ties). <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Social networks are of interest to researchers in part because they are thought to mediate the flow of information in communities and organizations. Here we study the temporal dynamics of communication using on-line data, including e-mail communication among the faculty and staff of a large university over a two-year period. We formulate a temporal notion of"distance"in the underlying social network by measuring the minimum time required for information to spread from one node to another -- a concept that draws on the notion of vector-clocks from the study of distributed computing systems. We find that such temporal measures provide structural insights that are not apparent from analyses of the pure social network topology. In particular, we define the network backbone to be the subgraph consisting of edges on which information has the potential to flow the quickest. We find that the backbone is a sparse graph with a concentration of both highly embedded edges and long-range bridges -- a finding that sheds new light on the relationship between tie strength and connectivity in social networks. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Whether they are modeling bookmarking behavior in Flickr or cascades of failure in large networks, models of diffusion often start with the assumption that a few nodes start long chain reactions, resulting in large-scale cascades. 
While reasonable under some conditions, this assumption may not hold for social media networks, where user engagement is high and information may enter a system from multiple disconnected sources. Using a dataset of 262,985 Facebook Pages and their associated fans, this paper provides an empirical investigation of diffusion through a large social media network. Although Facebook diffusion chains are often extremely long (chains of up to 82 levels have been observed), they are not usually the result of a single chain-reaction event. Rather, these diffusion chains are typically started by a substantial number of users. Large clusters emerge when hundreds or even thousands of short diffusion chains merge together. This paper presents an analysis of these diffusion chains using zero-inflated negative binomial regressions. We show that after controlling for distribution effects, there is no meaningful evidence that a start node’s maximum diffusion chain length can be predicted with the user's demographics or Facebook usage characteristics (including the user's number of Facebook friends). This may provide insight into future research on public opinion formation. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Microblogging sites are a unique and dynamic Web 2.0 communication medium. Understanding the information flow in these systems can not only provide better insights into the underlying sociology, but is also crucial for applications such as content ranking, recommendation and filtering, spam detection and viral marketing. In this paper, we characterize the propagation of URLs in the social network of Twitter, a popular microblogging site. We track 15 million URLs exchanged among 2.7 million users over a 300 hour period. 
Data analysis uncovers several statistical regularities in the user activity, the social graph, the structure of the URL cascades and the communication dynamics. Based on these results we propose a propagation model that predicts which users are likely to mention which URLs. The model correctly accounts for more than half of the URL mentions in our data set, while maintaining a false positive rate lower than 15%. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Spreading of information, ideas or diseases can be conveniently modelled in the context of complex networks. An analysis now reveals that the most efficient spreaders are not always necessarily the most connected agents in a network. Instead, the position of an agent relative to the hierarchical topological organization of the network might be as important as its connectivity. <s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Social media forms a central domain for the production and dissemination of real-time information. Even though such flows of information have traditionally been thought of as diffusion processes over social networks, the underlying phenomena are the result of a complex web of interactions among numerous participants. Here we develop the Linear Influence Model where rather than requiring the knowledge of the social network and then modeling the diffusion by predicting which node will influence which other nodes in the network, we focus on modeling the global influence of a node on the rate of diffusion through the (implicit) network. We model the number of newly infected nodes as a function of which other nodes got infected in the past. 
For each node we estimate an influence function that quantifies how many subsequent infections can be attributed to the influence of that node over time. A nonparametric formulation of the model leads to a simple least squares problem that can be solved on large datasets. We validate our model on a set of 500 million tweets and a set of 170 million news articles and blog posts. We show that the Linear Influence Model accurately models influences of nodes and reliably predicts the temporal dynamics of information diffusion. We find that patterns of influence of individual participants differ significantly depending on the type of the node and the topic of the information. <s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Social media played a central role in shaping political debates in the Arab Spring. A spike in online revolutionary conversations often preceded major events on the ground. Social media helped spread democratic ideas across international borders.No one could have predicted that Mohammed Bouazizi would play a role in unleashing a wave of protest for democracy in the Arab world. Yet, after the young vegetable merchant stepped in front of a municipal building in Tunisia and set himself on fire in protest of the government on December 17, 2010, democratic fervor spread across North Africa and the Middle East.Governments in Tunisia and Egypt soon fell, civil war broke out in Libya, and protestors took to the streets in Algeria, Morocco, Syria, Yemen and elsewhere. The Arab Spring had many causes. One of these sources was social media and its power to put a human face on political oppression. Bouazizi’s self-immolation was one of several stories told and retold on Facebook, Twitter, and YouTube in ways that inspired dissidents to organize protests, criticize their governments, and spread ideas about democracy. 
Until now, most of what we have known about the role of social media in the Arab Spring has been anecdotal. Focused mainly on Tunisia and Egypt, this research included creating a unique database of information collected from Facebook, Twitter, and YouTube. The research also included creating maps of important Egyptian political websites, examining political conversations in the Tunisian blogosphere, analyzing more than 3 million Tweets based on keywords used, and tracking which countries thousands of individuals tweeted from during the revolutions. The result is that for the first time we have evidence confirming social media’s critical role in the Arab Spring. <s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Online social networking technologies enable individuals to simultaneously share information with any number of peers. Quantifying the causal effect of these mediums on the dissemination of information requires not only identification of who influences whom, but also of whether individuals would still propagate information in the absence of social signals about that information. We examine the role of social networks in online information diffusion with a large-scale field experiment that randomizes exposure to signals about friends' information sharing among 253 million subjects in situ. Those who are exposed are significantly more likely to spread information, and do so sooner than those who are not exposed. We further examine the relative role of strong and weak ties in information propagation. We show that, although stronger ties are individually more influential, it is the more abundant weak ties who are responsible for the propagation of novel information. This suggests that weak ties may play a more dominant role in the dissemination of information online than currently believed.
<s> BIB009 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Nodes in real-world networks, such as social, information or technological networks, organize into communities where edges appear with high concentration among the members of the community. Identifying communities in networks has proven to be a challenging task mainly due to a plethora of definitions of a community, intractability of algorithms, issues with evaluation and the lack of a reliable gold-standard ground-truth. We study a set of 230 large social, collaboration and information networks where nodes explicitly define group memberships. We use these groups to define the notion of ground-truth communities. We then propose a methodology which allows us to compare and quantitatively evaluate different definitions of network communities on a large scale. We choose 13 commonly used definitions of network communities and examine their quality, sensitivity and robustness. We show that the 13 definitions naturally group into four classes. We find that two of these definitions, Conductance and Triad-participation-ratio, consistently give the best performance in identifying ground-truth communities. <s> BIB010 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Information Diffusion and Role of Influence <s> Social networks play a fundamental role in the diffusion of information. However, there are two different ways of how information reaches a person in a network. Information reaches us through connections in our social networks, as well as through the influence external out-of-network sources, like the mainstream media. 
While most present models of information adoption in networks assume information only passes from a node to node via the edges of the underlying network, the recent availability of massive online social media data allows us to study this process in more detail. We present a model in which information can reach a node via the links of the social network or through the influence of external sources. We then develop an efficient model parameter fitting technique and apply the model to the emergence of URL mentions in the Twitter network. Using a complete one month trace of Twitter we study how information reaches the nodes of the network. We quantify the external influences over time and describe how these influences affect the information adoption. We discover that the information tends to "jump" across the network, which can only be explained as an effect of an unobservable external influence on the network. We find that only about 71% of the information volume in Twitter can be attributed to network diffusion, and the remaining 29% is due to external events and factors outside the network. <s> BIB011
Diffusion of information content on social networks such as Twitter and Facebook has been a major research focus BIB005 BIB008 ] BIB004 ]. Several information diffusion models, such as Linear Threshold BIB001 and Independent Cascades BIB002 , and variations of these models, have been built. Models have attempted to capture the diffusion path, the degree of diffusion for specific information on observed social networks, and the role of influence of participants in the information flow process. [ BIB005 ] Proposes a propagation model predicting which URLs each given user will mention, and shows the effectiveness of the model. [ BIB006 ] Identifies a network core using k-shell decomposition analysis, where the more central vertices in the graph receive higher k-values. The innermost vertices form the graph core. Shows that the network core members are the best spreaders of information, not the most highly connected or the most centrally located ones. [ BIB003 ] Formulates a temporal notion of social network distance measuring the minimum time for information to spread across a given vertex pair. Defines a network backbone, a subgraph in which the information flows the quickest. Shows that the network backbone for information propagation on a social network graph is sparse, with a mix of "highly embedded edges and long-range bridges". [ BIB009 ] Quantifies the causal effect of social networks in disseminating information, by identifying who influences whom, and exploring whether they would propagate the same information if the social signals were absent. Experiments with the information sharing behavior of 253 million users. Shows that while stronger ties are more influential at an individual level, the more abundant weak ties are responsible for novel information propagation. [ Hypothesizes that homophily affects the core mechanism behind social information propagation. Proposes a dynamic Bayesian network for capturing information diffusion.
Shows that considering homophily leads to an improvement of 15%-25% in the prediction of information diffusion. [ BIB007 ] Models the global influence of a node on the "rate of information diffusion through the implicit social network". Proposes the Linear Influence Model, in which the number of newly infected (informed) nodes is a "function of which other nodes got infected in the past". Shows that the patterns of influence of individual participants differ significantly, depending on node type and topic of information. [ BIB010 ] Explores speed, scale and range as major properties of social network information diffusion. Shows that user properties, and the rate at which a user is mentioned, are predictors of information propagation. Shows that the information propagation range for an event is higher for tweets made later. [ BIB011 ] Observes that information can flow both through online social networks and through sources outside the network, such as news media. Models information propagation accordingly. Uses hazard functions to quantify external exposure and influence. Applies the model to URLs emerging on Twitter. Shows that, affected by external influence (and not social edges), information jumps across the Twitter network. Quantifies this information jump. Shows that 71% of information diffuses over the Twitter network, while the remaining 29% happens outside the network.
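The Independent Cascades model named at the start of this section can be sketched as a short simulation. The graph, seed set and activation probability below are illustrative, not taken from any surveyed paper; the key property of the model is that each active node gets exactly one chance to activate each neighbour.

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=None):
    """Single run of the Independent Cascades diffusion model.

    graph: dict mapping node -> list of out-neighbour nodes
    seeds: initially active (informed) nodes
    p: probability that an active node activates a neighbour,
       tried exactly once per edge
    """
    rng = rng or random.Random(0)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)        # v is newly informed
                    next_frontier.append(v)
        frontier = next_frontier         # only newly informed nodes spread next round
    return active

g = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}
print(independent_cascade(g, seeds={0}, p=1.0))  # -> {0, 1, 2, 3, 4}
```

The Linear Threshold model differs mainly in the activation rule: a node becomes active once the summed weight of its active neighbours exceeds the node's own threshold, rather than through independent per-edge coin tosses.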
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topical Information Diffusion on Social Networks <s> Preface (B. Bollobas). Paul Erdos at Seventy-Five (B. Bollobas). Packing Smaller Graphs into a Graph (J. Akiyama, F. Nakada, S. Tokunaga). The Star Arboricity of Graphs (I. Algor, N. Alon). Graphs with a Small Number of Distinct Induced Subgraphs (N. Alon, B. Bollobas). Extensions of Networks with Given Diameter (J.-C. Bermond, K. Berrada, J. Bond). Confluence of Some Presentations Associated with Graphs (N. Biggs). Long Cycles in Graphs with No Subgraphs of Minimal Degree 3 (B. Bollobas, G. Brightwell). First Cycles in Random Directed Graph Processes (B. Bollobas, S. Rasmussen). Trigraphs (J.A. Bondy). On Clustering Problems with Connected Optima in Euclidean Spaces (E. Boros, P.L. Hammer). Some Sequences of Integers (P.J. Cameron). 1-Factorizing Regular Graphs of High Degree - An Improved Bound (A.G. Chetwynd, A.J.W. Hilton). Graphs with Small Bandwidth and Cutwidth (F.R.K. Chung, P.D. Seymour). Simplicial Decompositions of Graphs: A Survey of Applications (R. Diestel). On the Number of Distinct Induced Subgraphs of a Graph (P. Erdos, A. Hajnal). On the Number of Partitions of n Without a Given Subsum (I) (P. Erdos, J.L. Nicolas, A. Sarkozy). The First Cycles in an Evolving Graph (P. Flajolet, D.E. Knuth, B. Pittel). Covering the Complete Graph by Partitions (Z. Furedi). A Density Version of the Hales-Jewett Theorem for k = 3 (H. Furstenburg, Y. Katznelson). On the Path-Complete Bipartite Ramsey Number (R. Haggkvist). Towards a Solution of the Dinitz Problem? (R. Haggkvist). A Note on the Latin Squares with Restricted Support (R. Haggkvist). Pseudo-Random Hypergraphs (J. Haviland, A. Thomason). Bouquets of Geometric Lattices: Some Algebraic and Topological Aspects (M. Laurent, M. Deza). A Short Proof of a Theorem of Vamos on Matroid Representations (I. Leader). 
An On-Line Graph Coloring Algorithm with Sublinear Performance Ratio (L. Lovasz, M. Saks, W.T. Trotter). The Partite Construction and Ramsey Set Systems (J. Nesetril, V. Rodl). Scaffold Permutations (P. Rosenstiehl). Bounds on the Measurable Chromatic Number of R n (L.A. Szekely, N.C. Wormald). A Simple Linear Expected Time Algorithm for Finding a Hamilton Path (A. Thomason). Dense Expanders and Pseudo-Random Bipartite Graphs (A. Thomason). Forbidden Graphs for Degree and Neighbourhood Conditions (D.R. Woodall). <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topical Information Diffusion on Social Networks <s> Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people's social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localize... <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topical Information Diffusion on Social Networks <s> Motivated by several applications, we introduce various distance measures between "top k lists." 
Some of these distance measures are metrics, while others are not. For each of these latter distance measures, we show that they are "almost" a metric in the following two seemingly unrelated aspects: (i) they satisfy a relaxed version of the polygonal (hence, triangle) inequality, and (ii) there is a metric with positive constant multiples that bound our measure above and below. This is not a coincidence---we show that these two notions of almost being a metric are the same. Based on the second notion, we define two distance measures to be equivalent if they are bounded above and below by constant multiples of each other. We thereby identify a large and robust equivalence class of distance measures. Besides the applications to the task of identifying good notions of (dis)similarity between two top k lists, our results imply polynomial-time constant-factor approximation algorithms for the rank aggregation problem with respect to a large class of distance measures. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topical Information Diffusion on Social Networks <s> We study a map of the Internet (at the autonomous systems level), by introducing and using the method of k-shell decomposition and the methods of percolation theory and fractal geometry, to find a model for the structure of the Internet. In particular, our analysis uses information on the connectivity of the network shells to separate, in a unique (no parameters) way, the Internet into three subcomponents: (i) a nucleus that is a small (≈100 nodes), very well connected globally distributed subgraph; (ii) a fractal subcomponent that is able to connect the bulk of the Internet without congesting the nucleus, with self-similar properties and critical exponents predicted from percolation theory; and (iii) dendrite-like structures, usually isolated nodes that are connected to the rest of the network through the nucleus only.
We show that our method of decomposition is robust and provides insight into the underlying structure of the Internet and its functional consequences. Our approach of decomposing the network is general and also useful when studying other complex networks. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topical Information Diffusion on Social Networks <s> Social networks are of interest to researchers in part because they are thought to mediate the flow of information in communities and organizations. Here we study the temporal dynamics of communication using on-line data, including e-mail communication among the faculty and staff of a large university over a two-year period. We formulate a temporal notion of"distance"in the underlying social network by measuring the minimum time required for information to spread from one node to another -- a concept that draws on the notion of vector-clocks from the study of distributed computing systems. We find that such temporal measures provide structural insights that are not apparent from analyses of the pure social network topology. In particular, we define the network backbone to be the subgraph consisting of edges on which information has the potential to flow the quickest. We find that the backbone is a sparse graph with a concentration of both highly embedded edges and long-range bridges -- a finding that sheds new light on the relationship between tie strength and connectivity in social networks. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topical Information Diffusion on Social Networks <s> Operators of online social networks are increasingly sharing potentially sensitive information about users and their relationships with advertisers, application developers, and data-mining researchers. 
Privacy is typically protected by anonymization, i.e., removing names, addresses, etc. We present a framework for analyzing privacy and anonymity in social networks and develop a new re-identification algorithm targeting anonymized social-network graphs. To demonstrate its effectiveness on real-world networks, we show that a third of the users who can be verified to have accounts on both Twitter, a popular microblogging service, and Flickr, an online photo-sharing site, can be re-identified in the anonymous Twitter graph with only a 12% error rate. Our de-anonymization algorithm is based purely on the network topology, does not require creation of a large number of dummy "sybil" nodes, is robust to noise and all existing defenses, and works even when the overlap between the target network and the adversary's auxiliary information is small. <s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topical Information Diffusion on Social Networks <s> Twitter, a microblogging service less than three years old, commands more than 41 million users as of July 2009 and is growing fast. Twitter users tweet about any topic within the 140-character limit and follow others to receive their tweets. The goal of this paper is to study the topological characteristics of Twitter and its power as a new medium of information sharing. We have crawled the entire Twitter site and obtained 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets. In its follower-following topology analysis we have found a non-power-law follower distribution, a short effective diameter, and low reciprocity, which all mark a deviation from known characteristics of human social networks [28]. In order to identify influentials on Twitter, we have ranked users by the number of followers and by PageRank and found two rankings to be similar.
Ranking by retweets differs from the previous two rankings, indicating a gap in influence inferred from the number of followers and that from the popularity of one's tweets. We have analyzed the tweets of top trending topics and reported on their temporal behavior and user participation. We have classified the trending topics based on the active period and the tweets and show that the majority (over 85%) of topics are headline news or persistent news in nature. A closer look at retweets reveals that any retweeted tweet is to reach an average of 1,000 users no matter what the number of followers is of the original tweet. Once retweeted, a tweet gets retweeted almost instantly on next hops, signifying fast diffusion of information after the 1st retweet. To the best of our knowledge this work is the first quantitative study on the entire Twittersphere and information diffusion on it. <s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topical Information Diffusion on Social Networks <s> Spreading of information, ideas or diseases can be conveniently modelled in the context of complex networks. An analysis now reveals that the most efficient spreaders are not always necessarily the most connected agents in a network. Instead, the position of an agent relative to the hierarchical topological organization of the network might be as important as its connectivity. <s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topical Information Diffusion on Social Networks <s> Social networks have emerged as a critical factor in information dissemination, search, marketing, expertise and influence discovery, and potentially an important tool for mobilizing people. Social media has made social networks ubiquitous, and also given researchers access to massive quantities of data for empirical analysis. 
These data sets offer a rich source of evidence for studying dynamics of individual and group behavior, the structure of networks and global patterns of the flow of information on them. However, in most previous studies, the structure of the underlying networks was not directly visible but had to be inferred from the flow of information from one individual to another. As a result, we do not yet understand dynamics of information spread on networks or how the structure of the network affects it. We address this gap by analyzing data from two popular social news sites. Specifically, we extract social networks of active users on Digg and Twitter, and track how interest in news stories spreads among them. We show that social networks play a crucial role in the spread of information on these sites, and that network structure affects dynamics of information flow. <s> BIB009 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topical Information Diffusion on Social Networks <s> Social influence can be described as power - the ability of a person to influence the thoughts or actions of others. Identifying influential users on online social networks such as Twitter has been actively studied recently. In this paper, we investigate a modified k-shell decomposition algorithm for computing user influence on Twitter. The input to this algorithm is the connection graph between users as defined by the follower relationship. User influence is measured by the k-shell level, which is the output of the k-shell decomposition algorithm. Our first insight is to modify this k-shell decomposition to assign logarithmic k-shell values to users, producing a measure of users that is surprisingly well distributed in a bell curve. Our second insight is to identify and remove peering relationships from the network to further differentiate users. In this paper, we include findings from our study. 
<s> BIB010 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topical Information Diffusion on Social Networks <s> Social networks play a fundamental role in the diffusion of information. However, there are two different ways of how information reaches a person in a network. Information reaches us through connections in our social networks, as well as through the influence external out-of-network sources, like the mainstream media. While most present models of information adoption in networks assume information only passes from a node to node via the edges of the underlying network, the recent availability of massive online social media data allows us to study this process in more detail. We present a model in which information can reach a node via the links of the social network or through the influence of external sources. We then develop an efficient model parameter fitting technique and apply the model to the emergence of URL mentions in the Twitter network. Using a complete one month trace of Twitter we study how information reaches the nodes of the network. We quantify the external influences over time and describe how these influences affect the information adoption. We discover that the information tends to "jump" across the network, which can only be explained as an effect of an unobservable external influence on the network. We find that only about 71% of the information volume in Twitter can be attributed to network diffusion, and the remaining 29% is due to external events and factors outside the network. <s> BIB011 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topical Information Diffusion on Social Networks <s> Current social media research mainly focuses on temporal trends of the information flow and on the topology of the social graph that facilitates the propagation of information. 
In this paper we study the effect of the content of the idea on the information propagation. We present an efficient hybrid approach based on a linear regression for predicting the spread of an idea in a given time frame. We show that a combination of content features with temporal and topological features minimizes prediction error. Our algorithm is evaluated on Twitter hashtags extracted from a dataset of more than 400 million tweets. We analyze the contribution and the limitations of the various feature types to the spread of information, demonstrating that content aspects can be used as strong predictors thus should not be disregarded. We also study the dependencies between global features such as graph topology and content features. <s> BIB012
In a pioneering study, [ BIB007 ] suggest that information diffuses on Twitter-like social microblogging platforms in a manner similar to news media. They show that, counting the original tweet and its retweets, a tweet reaches about 1,000 users on average, regardless of the number of followers of its originator. This supports the notion that such microblogging networks are hybrid in nature, combining the characteristics of social and information networks. Their dataset comprises 41.7 million Twitter users, 1.47 billion social followership edges and 106 million tweets. They observe that Twitter trends are different from traditional social network trends, with lower than expected degrees of separation and a non-power-law distribution of followers. The reciprocity of Twitter is low compared to traditional social networks. However, the reciprocated relationships exhibit homophily [ BIB002 ] to an extent. They rank Twitter users by PageRank of followings, number of followers and retweets. They find that the rankings by PageRank and by number of followers are similar, but the ranking by retweets is significantly different. They measure this using an optimistic variant of the generalization of Kendall's tau proposed by [ BIB003 ], setting penalty p = 0. They observe that a significant proportion of live news of a broadcasting nature (such as accidents and sports) breaks out on Twitter ahead of CNN, a traditional online medium. They note that around 20% of Twitter users participate in trending topics, and around 15% of the participants participate in more than 10 topics in 4 months. They observe that the active periods of most trends are a week or shorter. They then investigate whether favoritism exists in retweets. For this, taking f_i as the fraction of a user's retweets attributable to a single user i, they compute Y(k) = Σ_i f_i², averaged over all vertices having made / received k retweets. If followers tend to retweet evenly, then kY(k) ∼ 1, whereas kY(k) ∼ k if only a small subset of followers retweet.
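The favoritism quantity kY(k) can be sketched in a few lines. Here Y(k) is taken as the sum of squared per-source retweet fractions (an inverse-participation-ratio style measure consistent with the kY(k) ∼ 1 versus kY(k) ∼ k behavior described above); the function name and the toy counts are illustrative assumptions, not from the original study.

```python
def favoritism_Y(retweet_counts):
    """Given the number of retweets a user received from each distinct
    retweeter, return Y = sum of squared fractions. Y ~ 1/k when k
    retweeters contribute evenly; Y ~ 1 when a single retweeter dominates."""
    total = sum(retweet_counts)
    fractions = [c / total for c in retweet_counts]
    return sum(f * f for f in fractions)

# Even retweeting among k = 10 followers: k * Y(k) ~ 1.
even = [1] * 10
# One dominant retweeter among the same k = 10: k * Y(k) grows with k.
skewed = [91] + [1] * 9

print(10 * favoritism_Y(even))    # -> 1.0
print(10 * favoritism_Y(skewed))  # noticeably larger than 1
```

A linear growth of kY(k) with k, as reported in the study, then signals that retweets are concentrated on a few sources rather than spread evenly.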
Experimentally, they observe a linear correlation with k, which indicates that retweets exhibit favoritism: people retweet from only a small number of others, and only a subset of a user's followers tend to retweet. In effect, given the user originating the information, only a few users influence whether the information diffuses further via retweets. [ BIB008 ] show that the most central or highly connected people are not necessarily the best spreaders of information; often, those located at the network core are. They identify the best spreaders by k-shell decomposition analysis [ BIB001 ] [ BIB004 ] [Seidman 1983]. They further show that, when multiple spreaders are considered together, the distance between them plays a critical role in determining the spread level. They apply the Susceptible-Infectious-Recovered (SIR) and Susceptible-Infectious-Susceptible (SIS) models [Heesterbeek 2000] [Hethcote 2000] on four different social networks, including an email network in a department of a university in London, a blogging community (LiveJournal.com), a contact network of inpatients in a Swedish hospital and "a network of actors that have co-starred in movies labeled by imdb.com as adult". They use a small value of β, "the probability that an infectious vertex will infect a susceptible neighbor", keeping the infected population fraction small. Using k-shell (k-core) decomposition, they assign each vertex of degree k a coreness k_S, an integer index that captures the depth (layer/k-shell) of the network to which the vertex belongs. The coreness index k_S is assigned such that the more centrally a vertex is located in the graph, the higher its k_S value. The innermost vertices thereby form the graph core.
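Seidman's k-core peeling, which underlies the k-shell decomposition above, can be sketched in pure Python. The adjacency structure and the function name are illustrative assumptions; the logic is the standard iterative removal of minimum-degree vertices.

```python
def k_shell_decomposition(adj):
    """Assign each vertex its coreness k_S by iteratively peeling vertices
    of remaining degree <= k (Seidman's k-core decomposition).
    `adj` maps vertex -> set of neighbors (undirected graph)."""
    degree = {v: len(ns) for v, ns in adj.items()}
    alive = set(adj)
    coreness = {}
    k = 0
    while alive:
        peel = {v for v in alive if degree[v] <= k}
        if not peel:
            k += 1          # no vertex can be peeled at this level
            continue
        while peel:
            v = peel.pop()
            coreness[v] = k
            alive.discard(v)
            for u in adj[v]:                 # peeling v lowers neighbor degrees
                if u in alive:
                    degree[u] -= 1
                    if degree[u] <= k:
                        peel.add(u)          # cascade within the same shell
    return coreness

# Toy graph: a triangle a-b-c with a pendant vertex d attached to a.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
# d sits on the periphery (k_S = 1); the triangle a, b, c forms the core (k_S = 2).
print(k_shell_decomposition(adj))
```

The vertices surviving to the highest k value are exactly the "inner core" that the study identifies as the most efficient spreaders.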
If (k_S, k) are the coreness and degree of vertex i (origin of the epidemic) and "γ(k_S, k) the union of all the N(k_S, k) such vertices", then the average population infected with the epidemic under SIR-based spreading, averaged over all such origins, is M(k_S, k) = Σ_{i∈γ(k_S,k)} M_i / N(k_S, k). Their analysis finds three general results. (a) A number of poor spreaders exist among the hubs on the network periphery (large k, low k_S). (b) Infected nodes belonging to the same k-shell give rise to similar outbreaks of epidemic, irrespective of the degree of the origin of infection. (c) The "inner core of the network" comprises the most efficient disease (information) spreaders, independent of their degree. They empirically observe that influence spreading behavior is better predicted by the k-shell index of a node than by its degree or its betweenness centrality. An outbreak starting at the network core (large k_S) finds many paths over which the information can spread through the whole network, regardless of the degree of the originating vertex. In a subsequent work, [ BIB010 ] modify the k-shell decomposition algorithm to use a log-scale mapping, which produces fewer but more appropriate k-shell values. [ BIB005 ] propose a temporal notion of social network distance, using the shortest time needed for information to reach one vertex from another. They find that structural information that is not evident from the topology of the social network can be obtained from such temporal measures. They define a network backbone, a subgraph in which information flows the quickest, and experimentally show that the network backbone for information propagation on a social network graph is sparse, with a mix of long-range bridges and strongly embedded edges. They demonstrate this on two email datasets and on user communications across Wikipedia admins and editors.
To arrive at the temporal notion of social network distance, they quantify how up-to-date each vertex v is about each other vertex u at time t. For this, they determine the largest t′ < t such that information starting from vertex u at time t′ can reach v at or before time t. The view of v towards u at time t is this largest value of t′, denoted by φ_v,t(u). They define the "information latency of u with respect to v at time t" as "how much v's view of u is out-of-date at time t", quantified as (t − φ_v,t(u)). Iterating over all vertices, they take the view of v towards all the vertices in the graph at time t, and represent it as a single vector φ_v,t = (φ_v,t(u) : u ∈ V). They define φ_v,t as the vector clock of vertex v at time t. φ_v,t is updated whenever v receives a communication. They define the instantaneous backbone of a network using the concept of essential edges. In the backbone, "an edge (v, w) is essential at time t if the value φ_w,t(v) is the result of a vector-clock update directly from v, via some communication event (v, w, t′), where t′ < t". Intuitively, an edge (v, w) is essential if the most up-to-date view that the target w has of the source v comes via direct communication over the edge, rather than via an indirect path over other edges. They define the backbone H_t of the graph at time t to have the vertex set V, and as edge set those edges of the original graph G that are essential at time t. Using this, and assuming a perfectly periodic communication pattern between vertex pairs, they develop a notion of aggregate backbone by aggregating the communication over the entire period of observation. For each edge (v, w) of G where ρ_v,w > 0 (v has sent w at least one message) within the time period [0, T], the delay δ_v,w of the edge is defined as T/ρ_v,w, which simply approximates the communication from v to w as temporally evenly spaced. They assign weight δ_v,w to each edge (v, w), obtaining G_δ from G.
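The vector-clock update over a time-ordered stream of communication events can be sketched as follows; the node names, event list and function name are assumptions for illustration, and views start at −∞ ("never heard of").

```python
def vector_clock_views(nodes, events):
    """Maintain phi[v][u]: the freshest time t' such that information
    leaving u at t' could have reached v. On a message (u, v, t), the
    receiver v learns u's state as of t and inherits u's views of all
    other nodes. `events` must be sorted by time t."""
    NEG = float("-inf")
    phi = {v: {u: (0.0 if u == v else NEG) for u in nodes} for v in nodes}
    for u, v, t in events:
        phi[u][u] = t                       # sender is current about itself
        for w in nodes:                     # receiver inherits sender's views
            phi[v][w] = max(phi[v][w], phi[u][w])
    return phi

# b hears from a directly at t=1; c hears about a only indirectly via b at
# t=5, so c's view of a is stale: the information latency at t=5 is 5 - 1 = 4.
events = [("a", "b", 1.0), ("b", "c", 5.0)]
phi = vector_clock_views(["a", "b", "c"], events)
print(phi["c"]["a"])  # -> 1.0
```

In this sketch the edge (b, c) is essential for c's view of a, since c's freshest view of a arrived through that direct communication.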
In this aggregate setting, where communications are evenly spaced, the path with the minimum sum of delays is the path over which information travels the fastest between a pair of vertices. They define essential edges in the aggregate sense in G_δ, and define H*, an aggregate backbone, constituted using only these essential aggregate edges. They define the range of an edge (v, w) as the length of the shortest unweighted alternate path from v to w over the social network if the edge were deleted. On a typical social network this value is often observed to be 2, as most pairs of social connections tend to have common (shared) friends. The embeddedness of an edge (v, w) is intuitively the fraction of neighbors common to both v and w. Formally, if N_v and N_w respectively denote the neighbor sets of v and w, then the embeddedness of the edge is defined as |N_v ∩ N_w| / |N_v ∪ N_w|. Intuitively, endpoints of edges with high embeddedness have many common neighbors, and hence occupy dense clusters. Experimentally, they find that highly embedded edges are over-represented both in instantaneous and aggregate backbones. These tend to be edges with high rates of communication; the presence of such edges in the backbone leads to fast information diffusion. They also observe that increasing node-dependent delays (delays ε introduced at nodes, in addition to the edge delay δ_v,w) leads to denser backbones. As that happens, the significance of quick indirect paths diminishes. They note that a practical method for individuals to influence the potential information flow is to vary their communication rates by simple rules. [ ] study the impact of user homophily on information diffusion on Twitter data. They hypothesize that homophily affects the core mechanism behind social information propagation by structuring the ego-networks and impacting the communication behavior of individuals. They follow a three-step approach.
First, for the full social graph (baseline) and for graphs filtered by attributes such as activity behavior and location, they extract diffusion characteristics along categories such as user-based (volume, number of seeds), topology-based (reach, spread) and time (rate). Second, to predict information diffusion for future time slices, they propose a dynamic Bayesian network. Third, they quantify the impact of homophily by how well the predicted characteristics explain the ground truth of observed information diffusion. They empirically find that the cases where homophily was considered could explain information diffusion and external trends with 15%-25% lower distortion than the cases where it was not considered. They consider a social action set O = {O_1, O_2, ...} (such as posting a tweet) and a set of attributes A = {a_k} (location, organization etc.). They consider four user attributes: location, information role (generators, mediators, receptors), content creation (those making self-related posts versus informers), and activity behavior (actions performed on the social network over a given time period). A pair of users is homophilous if at least one of their attributes matches more often than the random expectation of a match in the network. They construct an induced subgraph G(a_k = v) of G by selecting the vertices whose attribute a_k ∈ A takes the value v. An edge of G is retained in G(a_k) if both its endpoint vertices are included in G(a_k). The authors define s_N(θ), a "diffusion series on topic θ over time slices t_1 to t_N, as a directed acyclic graph", in which the vertices correspond to a subset of social network users involved in social action O_r on topic θ between t_1 and t_N. Vertices are assigned to slots: all vertices associated with time slice t_m (t_1 < t_m < t_N) are assigned slot l_m. They subsequently characterize diffusion.
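The attribute-filtered induced subgraphs G(a_k = v) used above can be sketched as follows; the attribute names and the toy graph are illustrative assumptions.

```python
def induced_subgraph(vertices, edges, attr, value):
    """Build G(a_k = value): keep the vertices whose attribute a_k equals
    `value`, and keep an edge only if both of its endpoints survive."""
    kept = {u for u in vertices if attr.get(u) == value}
    kept_edges = {(u, w) for (u, w) in edges if u in kept and w in kept}
    return kept, kept_edges

vertices = {"u1", "u2", "u3", "u4"}
edges = {("u1", "u2"), ("u2", "u3"), ("u3", "u4")}
location = {"u1": "NY", "u2": "NY", "u3": "SF", "u4": "NY"}

# Filtering on location = "NY" keeps u1, u2, u4; only the edge (u1, u2)
# has both endpoints inside the filtered vertex set.
print(induced_subgraph(vertices, edges, location, "NY"))
```

Diffusion characteristics extracted from such filtered graphs can then be compared against the full-graph baseline, as in the three-step approach.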
They extract diffusion characteristics on θ at time slice t_N from each diffusion collection S_N(θ) (defined as {s_N(θ)}) and {S_N;a_k(θ)}, as d_N(θ) and {D_N;a_k(θ)} respectively. They use eight measures to quantify diffusion at a given time slice t_N: the volume v_N(θ) with respect to topic θ (the total volume of contagion present in the graph); the participation p_N(θ) of users who take part in the information diffusion and further trigger other users to diffuse information; the dissemination δ_N(θ) of users who act as seeds of the information diffusion due to unobservable external influence; the reach r_N(θ), the extent to which topic θ reaches users, measured by the fraction of slots; the spread, the ratio of the maximum count of informed vertices found over all slots in the diffusion collection to the total user count; the cascade instances c_N(θ), the fraction of slots in s_N(θ) ∈ S_N(θ) in which the number of new users at slot l_m is higher than at the previous slot l_{m−1}; the collection size α_N(θ), the ratio of the number of diffusion series to the number of connected components; and the rate γ_N(θ), the speed of information diffusion on θ in S_N(θ). For each diffusion collection S_N(θ) and {S_N;a_k(θ)}, they predict at time slice t_N which users have a higher likelihood of repeating a social action at time slice t_{N+1}. This gives the diffusion collections at t_{N+1} as Ŝ_{N+1}(θ) and {Ŝ_{N+1;a_k}(θ)} ∀a_k ∈ A. They propose a dynamic Bayesian network, and model the likelihood of action O_i at t_{N+1} using environmental features (the activity of a given individual and their friends on a topic θ, and the popularity of topic θ in the previous time slice t_N), represented by F_i,N(θ), and the diffusion collection S_i,N+1(θ).
The goal is to estimate the expectation of social actions, which, using the first-order Markov property, they rewrite as a probability function. They use the "Viterbi algorithm on the observation-state transition matrix to determine the most likely sequence at t_N+1", thus predicting the observed action (the first term). They predict the second term, the hidden states, as P(S_i,N+1 | S_i,N, F_i,N). They subsequently substitute the probability of emission P(O_i,N+1 | S_i,N+1) and P(S_i,N+1 | S_i,N, F_i,N) to estimate the observed action of u_i: Ô_i,N+1. They repeat this for each user for time slice t_N+1. Using G and G(a_k), they "associate edges between the predicted user set, and the users in each diffusion series for the diffusion collections at t_N". They thus obtain the diffusion collections at t_N+1, i.e., Ŝ_N+1(θ) and Ŝ_N+1;a_k(θ). They measure the distortion between the actual and the predicted diffusion characteristics at t_N+1 using (a) saturation measurement and (b) utility measurement. Intuitively, saturation measurement captures the information content that has diffused into the network on topic θ. Utility measurement, on the other hand, attempts to correlate the prediction with external phenomena such as search and world news trends. Using cumulative distribution functions (CDF) of diffusion volume, they build search and news trend measurement models using the Kolmogorov-Smirnov (KS) statistic, given as max(|X − Y|) for a given diffusion D(X, Y), where X and Y are two vertices of the graph. [ BIB011 ] observe that real-world information can spread in two different ways: (a) over social network connections and (b) via external sources outside the network, such as the mainstream media. They point out that most of the literature assumes that information only passes over the social network connections, which may not be entirely accurate. They model information propagation considering that information can reach individuals along both ways.
They develop a model parameter fitting technique using hazard functions [Elandt- , to quantify the level of external exposure and influence. In their setting, the event profile captures the "influence of external sources on the network as a function of time". With time, nodes receive "streams of varying intensity of external exposures, governed by event profile λ_ext(t)". A node can get infected by each of the exposures, and eventually the node either becomes infected or the arrival of exposures ceases. Neighbors receive exposures from infected nodes. They define the exposure curve η(x), which determines how likely a node is to get infected with the arrival of each exposure, and set out to find the shape of the curve, as well as to infer how many exposures external sources generate over time. They model internal exposures using an internal hazard function λ^(i)_int(t), where i and j are neighbors and "time t has passed since node i was infected". Intuitively, in their setting, λ_int effectively models the time taken by a node to realize that one of its neighbors has become infected. The "expected number of internal exposures node i receives by time t" can be derived by summing up these exposures. They model exposure to unobserved external information sources, with varying intensities over time, as the event profile, as λ_ext(t)dt ≡ P(i receives exposure j ∈ [t, t + dt)). The above holds "for any node i, where t is the time elapsed since the current contagion had first appeared in the network". They "model the arrival of exposures as a binomial distribution". Since users receive both internal and external exposures simultaneously, they use the average of λ_ext(t) + λ^(i)_int(t) to "approximate the flux of exposures as constant in time, such that each time interval has an equal probability of arrival of exposures". The "sum of these events is a standard binomial random variable".
If a node receives x exposures, its chance of infection is governed by the exposure curve η(x), parameterized by ρ_1 ∈ (0, 1] and ρ_2 > 0. Note that η(0) = 0. This implies that a node can be infected only after being exposed to a contagion. The function is unimodal with an exponential tail. Hence there exists a critical mass of exposures at which the contagion is most infectious, followed by a decay caused by the contagion becoming overexposed/tiresome. Importantly, ρ_1 = max_x η(x) measures the infectiousness of a contagion in the network, and ρ_2 = argmax_x η(x) measures the contagion's enduring relevancy. For a given node i, the infection time distribution can be built as follows. Let F^(i)(t) ≡ P(τ_i ≤ t) denote "the probability of node i being infected by time t", where node i is infected at time τ_i. Using P^(i)_exp(n;t), the probability that node i has received n exposures by time t, F^(i)(t) is derived by summing over n the probability of receiving n exposures times the probability that at least one of them causes infection. Although F^(i)(t) is "analogous to the cumulative distribution function of infection probability", it is "not actually a distribution": lim_{x→∞} η(x) = 0 leads to lim_{t→∞} F(t) < 1. Their model thus ensures that the chance of a node never becoming infected is non-zero, which is realistic. They apply the model to the URLs emerging on Twitter. They observe information jumps across the Twitter network that the social edges cannot explain, and which are necessarily caused by unobservable external influences. They quantify this, noting that around 71% of the information volume diffuses over the Twitter network, while the remaining 29% is due to external events and factors outside it. [ ] create an interactive visualization tool to visually summarize opinion diffusion at a topic level, using a combination of a Sankey [Sankey 1898] graph and a tailored density map.
Using an information diffusion model that combines reach (the average number of people influenced by a message published by a given user), amplification (the likelihood that the audience responds to a message) and network score (the influence of a user's audience) to measure user influence levels, they characterize the propagation of opinions among many users regarding different topics on social media. [ BIB006 ] aim to identify (de-anonymize) users across social networking platforms. They hypothesize that identifying the profiles of users across multiple social networking platforms would provide more insight into the information diffusion process, by observing the diffusion of information over these multiple platforms at a given time. They demonstrate their hypothesis using Twitter and Flickr in combination. [ BIB012 ] attempt to predict the spread of ideas on Twitter, combining topological and temporal features with content features to minimize errors. [ BIB009 ] empirically study the characteristics of news spreading on several popular social networks, such as Twitter and Digg. [ ] propose a multi-class classification model to identify popular messages on Twitter by predicting retweet quantities, from TF-IDF (term frequency - inverse document frequency) and LDA, along with social properties of users.
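A minimal sketch of the TF-IDF weighting that such a retweet-prediction model might start from is shown below. The tiny tokenized corpus is an illustrative assumption, and a real system would feed these features, together with social properties of users, into a classifier.

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF vectors for a small corpus of tokenized tweets:
    tf = term count / document length, idf = log(N / document frequency)."""
    N = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # count each term once per document
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        length = len(doc)
        vectors.append({t: (c / length) * math.log(N / df[t])
                        for t, c in tf.items()})
    return vectors

tweets = [
    ["breaking", "news", "earthquake"],
    ["breaking", "celebrity", "gossip"],
    ["earthquake", "relief", "donations"],
]
vecs = tfidf(tweets)
# "breaking" appears in 2 of 3 tweets, "news" in only 1,
# so "news" receives a higher weight in the first tweet.
print(vecs[0]["news"] > vecs[0]["breaking"])  # -> True
```

Terms that are distinctive to a tweet thus weigh more than globally common ones, which is what makes TF-IDF features informative for predicting which messages get retweeted.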
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> The Role of Influence <s> Models of collective behavior are developed for situations where actors have two alternatives and the costs and/or benefits of each depend on how many other actors choose which alternative. The key concept is that of "threshold": the number or proportion of others who must make one decision before a given actor does so; this is the point where net benefits begin to exceed net costs for that particular actor. Beginning with a frequency distribution of thresholds, the models allow calculation of the ultimate or "equilibrium" number making each decision. The stability of equilibrium results against various possible changes in threshold distributions is considered. Stress is placed on the importance of exact distributions for outcomes. Groups with similar average preferences may generate very different results; hence it is hazardous to infer individual dispositions from aggregate outcomes or to assume that behavior was directed by ultimately agreed-upon norms. Suggested applications are to riot ... <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> The Role of Influence <s> Networks of coupled dynamical systems have been used to model biological oscillators, Josephson junction arrays, excitable media, neural networks, spatial games, genetic control networks and many other self-organizing systems. Ordinarily, the connection topology is assumed to be either completely regular or completely random. But many biological, technological and social networks lie somewhere between these two extremes. Here we explore simple models of networks that can be tuned through this middle ground: regular networks ‘rewired’ to introduce increasing amounts of disorder.
We find that these systems can be highly clustered, like regular lattices, yet have small characteristic path lengths, like random graphs. We call them ‘small-world’ networks, by analogy with the small-world phenomenon (popularly known as six degrees of separation). The neural network of the worm Caenorhabditis elegans, the power grid of the western United States, and the collaboration graph of film actors are shown to be small-world networks. Models of dynamical systems with small-world coupling display enhanced signal-propagation speed, computational power, and synchronizability. In particular, infectious diseases spread more easily in small-world networks than in regular lattices. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> The Role of Influence <s> We consider the combinatorial optimization problem of finding the most influential nodes on a large-scale social network for two widely-used fundamental stochastic diffusion models. It was shown that a natural greedy strategy can give a good approximate solution to this optimization problem. However, a conventional method under the greedy algorithm needs a large amount of computation, since it estimates the marginal gains for the expected number of nodes influenced by a set of nodes by simulating the random process of each model many times. In this paper, we propose a method of efficiently estimating all those quantities on the basis of bond percolation and graph theory, and apply it to approximately solving the optimization problem under the greedy algorithm. Using real-world large-scale networks including blog networks, we experimentally demonstrate that the proposed method can outperform the conventional method, and achieve a large reduction in computational cost.
<s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> The Role of Influence <s> Social influence determines to a large extent what we adopt and when we adopt it. This is just as true in the digital domain as it is in real life, and has become of increasing importance due to the deluge of user-created content on the Internet. In this paper, we present an empirical study of user-to-user content transfer occurring in the context of a time-evolving social network in Second Life, a massively multiplayer virtual world. We identify and model social influence based on the change in adoption rate following the actions of one's friends and find that the social network plays a significant role in the adoption of content. Adoption rates quicken as the number of friends adopting increases and this effect varies with the connectivity of a particular user. We further find that sharing among friends occurs more rapidly than sharing among strangers, but that content that diffuses primarily through social influence tends to have a more limited audience. Finally, we examine the role of individuals, finding that some play a more active role in distributing content than others, but that these influencers are distinct from the early adopters. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> The Role of Influence <s> Directed links in social media could represent anything from intimate friendships to common interests, or even a passion for breaking news or celebrity gossip. Such directed links determine the flow of information and hence indicate a user's influence on others — a concept that is crucial in sociology and viral marketing. In this paper, using a large amount of data collected from Twitter, we present an in-depth comparison of three measures of influence: indegree, retweets, and mentions. 
Based on these measures, we investigate the dynamics of user influence across topics and time. We make several interesting observations. First, popular users who have high indegree are not necessarily influential in terms of spawning retweets or mentions. Second, most influential users can hold significant influence over a variety of topics. Third, influence is not gained spontaneously or accidentally, but through concerted effort such as limiting tweets to a single topic. We believe that these findings provide new insights for viral marketing and suggest that topological measures such as indegree alone reveals very little about the influence of a user. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> The Role of Influence <s> Influence is a complex and subtle force that governs the dynamics of social networks as well as the behaviors of involved users. Understanding influence can benefit various applications such as viral marketing, recommendation, and information retrieval. However, most existing works on social influence analysis have focused on verifying the existence of social influence. Few works systematically investigate how to mine the strength of direct and indirect influence between nodes in heterogeneous networks. To address the problem, we propose a generative graphical model which utilizes the heterogeneous link information and the textual content associated with each node in the network to mine topic-level direct influence. Based on the learned direct influence, a topic-level influence propagation and aggregation algorithm is proposed to derive the indirect influence between nodes. We further study how the discovered topic-level influence can help the prediction of user behaviors. We validate the approach on three different genres of data sets: Twitter, Digg, and citation networks. Qualitatively, our approach can discover interesting influence patterns in heterogeneous networks. 
Quantitatively, the learned topic-level influence can greatly improve the accuracy of user behavior prediction. <s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> The Role of Influence <s> Social media forms a central domain for the production and dissemination of real-time information. Even though such flows of information have traditionally been thought of as diffusion processes over social networks, the underlying phenomena are the result of a complex web of interactions among numerous participants. Here we develop the Linear Influence Model where rather than requiring the knowledge of the social network and then modeling the diffusion by predicting which node will influence which other nodes in the network, we focus on modeling the global influence of a node on the rate of diffusion through the (implicit) network. We model the number of newly infected nodes as a function of which other nodes got infected in the past. For each node we estimate an influence function that quantifies how many subsequent infections can be attributed to the influence of that node over time. A nonparametric formulation of the model leads to a simple least squares problem that can be solved on large datasets. We validate our model on a set of 500 million tweets and a set of 170 million news articles and blog posts. We show that the Linear Influence Model accurately models influences of nodes and reliably predicts the temporal dynamics of information diffusion. We find that patterns of influence of individual participants differ significantly depending on the type of the node and the topic of the information. <s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> The Role of Influence <s> Recently, there has been tremendous interest in the phenomenon of influence propagation in social networks. 
The studies in this area assume they have as input to their problems a social graph with edges labeled with probabilities of influence between users. However, the question of where these probabilities come from or how they can be computed from real social network data has been largely ignored until now. Thus it is interesting to ask whether from a social graph and a log of actions by its users, one can build models of influence. This is the main problem attacked in this paper. In addition to proposing models and algorithms for learning the model parameters and for testing the learned models to make predictions, we also develop techniques for predicting the time by which a user may be expected to perform an action. We validate our ideas and techniques using the Flickr data set consisting of a social graph with 1.3M nodes, 40M edges, and an action log consisting of 35M tuples referring to 300K distinct actions. Beyond showing that there is genuine influence happening in a real social network, we show that our techniques have excellent prediction performance. <s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> The Role of Influence <s> In this paper we investigate the attributes and relative influence of 1.6M Twitter users by tracking 74 million diffusion events that took place on the Twitter follower graph over a two month interval in 2009. Unsurprisingly, we find that the largest cascades tend to be generated by users who have been influential in the past and who have a large number of followers. We also find that URLs that were rated more interesting and/or elicited more positive feelings by workers on Mechanical Turk were more likely to spread. In spite of these intuitive results, however, we find that predictions of which particular user or URL will generate large cascades are relatively unreliable. 
We conclude, therefore, that word-of-mouth diffusion can only be harnessed reliably by targeting large numbers of potential influencers, thereby capturing average effects. Finally, we consider a family of hypothetical marketing strategies, defined by the relative cost of identifying versus compensating potential "influencers." We find that although under some circumstances, the most influential users are also the most cost-effective, under a wide range of plausible assumptions the most cost-effective performance can be realized using "ordinary influencers"---individuals who exert average or even less-than-average influence. <s> BIB009 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> The Role of Influence <s> Online social networking technologies enable individuals to simultaneously share information with any number of peers. Quantifying the causal effect of these mediums on the dissemination of information requires not only identification of who influences whom, but also of whether individuals would still propagate information in the absence of social signals about that information. We examine the role of social networks in online information diffusion with a large-scale field experiment that randomizes exposure to signals about friends' information sharing among 253 million subjects in situ. Those who are exposed are significantly more likely to spread information, and do so sooner than those who are not exposed. We further examine the relative role of strong and weak ties in information propagation. We show that, although stronger ties are individually more influential, it is the more abundant weak ties who are responsible for the propagation of novel information. This suggests that weak ties may play a more dominant role in the dissemination of information online than currently believed. 
<s> BIB010 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> The Role of Influence <s> As a new communication paradigm, social media has promoted information dissemination in social networks. Previous research has identified several content-related features as well as user and network characteristics that may drive information diffusion. However, little research has focused on the relationship between emotions and information diffusion in a social media setting. In this paper, we examine whether sentiment occurring in social media content is associated with a user's information sharing behavior. We carry out our research in the context of political communication on Twitter. Based on two data sets of more than 165,000 tweets in total, we find that emotionally charged Twitter messages tend to be retweeted more often and more quickly compared to neutral ones. As a practical implication, companies should pay more attention to the analysis of sentiment related to their brands and products in social media communication as well as in designing advertising content that triggers emotions. <s> BIB011 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> The Role of Influence <s> This study investigates the communication patterns and network structure of influential opinion leaders on Twitter during the 2011 Seoul mayoral elections. Among the two candidates, we focus on the usage pattern of Wonsoon Park, who actively used Twitter during the election campaign. We analyzed the network structure of candidate Park and his 15 Twitter mentors during the election period (September 26, 2011 - October 26, 2011). The gathered data consists of 19,227 tweets from 8,547 users who were responded to by one of the 17 selected opinion leaders through mentions (@) or retweets (RT). 
To find the authorities and hubs, which play a crucial role in information propagation, the HITS algorithm was used to quantify the influence exerted by the opinion leaders. In addition, social network triads were used to identify the communication patterns between individual users on Twitter. Results of the analysis showed that the structure of the communication patterns in Twitter were mostly fragmented rather than transitive. This signified that communication occurred from, or converged to, a single node, rather than circulating through multiple nodes during the election period. The majority of the network structures were fragmented, or one-way conversations. In other words, communication happened in the form of aggregation and propagation, rather than sharing and circulating various ideas. <s> BIB012 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> The Role of Influence <s> Social influence analysis on microblog networks, such as Twitter, has been playing a crucial role in online advertising and brand management. While most previous influence analysis schemes rely only on the links between users to find key influencers, they omit the important text content created by the users. As a result, there is no way to differentiate the social influence in different aspects of life (topics). Although a few prior works do support topic-specific influence analysis, they either separate the analysis of content from the analysis of network structure, or assume that content is the only cause of links, which is clearly an inappropriate assumption for microblog networks. To address the limitations of the previous approaches, we propose a novel Followship-LDA (FLDA) model, which integrates both content topic discovery and social influence analysis in the same generative process. This model properly captures the content-related and content-independent reasons why a user follows another in a microblog network. 
We demonstrate that FLDA produces results with significantly better precision than existing approaches. Furthermore, we propose a distributed Gibbs sampling algorithm for FLDA, and demonstrate that it provides excellent scalability on large clusters. Finally, we incorporate the FLDA model in a general search framework for topic-specific influencers. A user freely expresses his/her interest by typing a few keywords, the search framework will return a ranked list of key influencers that satisfy the user's interest. <s> BIB013 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> The Role of Influence <s> The use of Social Media, particularly microblogging platforms such as Twitter, has proven to be an effective channel for promoting ideas to online audiences. In a world where information can bias public opinion it is essential to analyse the propagation and influence of information in large-scale networks. Recent research studying social media data to rank users by topical relevance have largely focused on the “retweet”, “following” and “mention” relations. In this paper we propose the use of semantic profiles for deriving influential users based on the retweet subgraph of the Twitter graph. We introduce a variation of the PageRank algorithm for analysing users' topical and entity influence based on the topical/entity relevance of a retweet relation. Experimental results show that our approach outperforms related algorithms including HITS, InDegree and Topic-Sensitive PageRank. We also introduce VisInfluence, a visualisation platform for presenting top influential users based on a topical query need. <s> BIB014
Social influence plays a significant role in information diffusion dynamics [BIB001] [BIB002]. Research has attempted to investigate information cascade flow along underlying social connection graphs, and to analyze the role of influence in such propagation. [BIB005] explore influence on Twitter based on indegree, mentions and retweets. They find that individuals with high indegree do not necessarily generate many mentions and retweets. They observe that while the majority of influential users tend to influence several topics, influence is gathered through focused efforts, such as limiting tweets to one topic. [BIB009] study influencing behavior in terms of cascade spread on Twitter. They find that the past influence of users and the interestingness of content can be used to predict the influencers. They observe that although URLs rated interesting, and content by influential users, spread more than average, no reliable method exists for predicting which user or URL will generate large cascades. [BIB006] study social influence in large-scale networks using a topical sum-product algorithm, and investigate the impact of topics on social influence propagation. One study examines the role of passivity and proposes a PageRank-like measure to find influence on Twitter; another proposes a PageRank-like measure to quantify influence on Twitter, based on link reciprocity and homophily. [BIB013] and [BIB014] conduct topic-specific influence analyses for microblogs. [Galuba et al. 2010] characterize the propagation of URLs on Twitter, and predict information cascades, factoring in the influence of users on one another. Tracking 2.7 million users exchanging over 15 million URLs, they show statistical regularities to be present in the social graph, user activity, URL cascade structure and communication dynamics.
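The indegree-versus-retweet comparison of [BIB005] above can be sketched with two simple counters; the tuple-based input format here is an assumption for illustration, not the paper's actual data schema.

```python
from collections import Counter

def influence_rankings(follows, retweets):
    """Rank users by two of the influence proxies compared in [BIB005]:
    indegree (number of followers) and number of times retweeted.
    `follows` holds (follower, followee) pairs; `retweets` holds
    (retweeter, original_author) pairs -- both formats are assumptions."""
    by_indegree = Counter(followee for _, followee in follows)
    by_retweets = Counter(author for _, author in retweets)
    return ([u for u, _ in by_indegree.most_common()],
            [u for u, _ in by_retweets.most_common()])

# Toy illustration: "a" has the most followers but "b" is retweeted most,
# mirroring the finding that high indegree need not imply many retweets.
deg_rank, rt_rank = influence_rankings(
    follows=[("u1", "a"), ("u2", "a"), ("u3", "a"), ("u1", "b")],
    retweets=[("u1", "b"), ("u2", "b"), ("u3", "a")])
```

The divergence between the two rankings on even this toy input illustrates why indegree alone reveals little about a user's influence.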
Galuba et al. look at URL sharing activities such as URL mentions by users in their tweets, URL popularity (how frequently a URL appears in tweets) and user activity (how frequently a user mentions URLs). They define two information cascade types. In the F-cascade, the flow of URLs is constrained to the follower graph: they draw an edge between a vertex pair v1 and v2 iff (a) "v1 and v2 tweeted about URL u", (b) "v1 mentioned u before v2", and (c) "v2 is a follower of v1". In the RT-cascade, they use a who-credits-whom model: they disregard the follower graph, and draw an edge between v1 and v2 iff (a) "v1 tweeted about URL u", (b) "v1 mentioned u before v2", and (c) "v2 credited v1 as the source of u". Using this, they propose a propagation model predicting which URLs are likely to be mentioned by which users. They construct two information diffusion models. The At-Least-One (ALO) model assumes that the influence of a single user is sufficient to cause a user to tweet. The retweet probability in the ALO model is computed from the baseline probability of user i tweeting any URL and γ_u ∈ [0, 1], the virality of URL u. Intuitively, the influence component A is the probability of one of the following, given u is a viral URL (γ_u): (a) a followee j (with influence weight α_ji) has influenced user i and tweeted u with probability p_u^j, or (b) user i tweets it under the influence of an unobserved entity (or tweets spontaneously). The time-dependent component T is defined using a log-normal distribution, via the complementary error function erfc. The linear threshold model (LT) they propose generalizes over ALO: the cumulative influence from all the followees needs to exceed a per-node threshold they introduce for the user to tweet, so the A component is replaced by a thresholded sum, with the sigmoid s(x) = 1/(1 + e^(−a(b−x))) serving as a continuous thresholding function.
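A minimal sketch of the two combination rules described above, assuming per-followee influence probabilities as inputs; the survey elides the exact ALO formula, so the noisy-or form here is an interpretation, and the sigmoid follows the survey's notation verbatim.

```python
import math

def alo_influence(p_followees, beta):
    """At-Least-One (noisy-or) combination: user i tweets if the baseline
    term beta fires, or at least one followee's influence fires. The
    per-followee probabilities and beta are assumed inputs; virality and
    the time component T are omitted for brevity."""
    not_tweeting = 1.0 - beta
    for p in p_followees:
        not_tweeting *= 1.0 - p
    return 1.0 - not_tweeting

def threshold_s(x, a=10.0, b=0.5):
    """Continuous thresholding function as written in the survey:
    s(x) = 1 / (1 + e^(-a(b - x))); a and b here are illustrative."""
    return 1.0 / (1.0 + math.exp(-a * (b - x)))
```

For instance, two followees each with influence 0.5 and a baseline of 0.2 give an ALO probability of 1 − 0.8·0.5·0.5 = 0.8, and s(x) crosses 0.5 exactly at the threshold x = b.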
They optimize parameters by training with an iterative gradient ascent method, and measure the accuracy of prediction of the information (URL) cascades using the F-score, the harmonic mean of precision and recall. [BIB010] quantify the causal effect of social networking mediums in disseminating information, by identifying who influences whom, as well as by exploring whether individuals would propagate the same information if the social signals were absent. Performing field experiments on the information sharing behavior of 253 million subjects on Facebook who visited the site at least once between August 14th and October 4th, 2010, they arrive at two interesting findings. (a) Those exposed to given information on social media are significantly more likely to propagate the information online, and do so sooner than those who are not exposed. (b) While the stronger ties are more influential at an individual level, the abundance of weak ties is more responsible for novel information propagation, indicating that a dominant role is played by weak ties in online information dissemination. Their experiment focuses on finding how much exposure of a URL to a user is needed on their Facebook feed (a dashboard on the Facebook user pane, where the user is presented with information content and a platform-level capability to share content with others) for the user to share the URL, beyond the expected correlations among Facebook friends. Before displaying, they randomly assign subject-URL pairs to feed versus no-feed conditions, such that the number of no-feed pairs is twice the number of feed pairs. Stories that are assigned the no-feed condition but have a URL are never displayed on the feed, while those assigned the feed condition are displayed on the user's feed and are never removed. They measure how exposure increases sharing behavior.
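The randomized feed/no-feed assignment can be sketched as follows; the per-pair 1/3 vs 2/3 split is inferred from the stated 2:1 no-feed-to-feed ratio, and the function name and input format are assumptions.

```python
import random

def assign_conditions(subject_url_pairs, seed=0):
    """Sketch of the randomized design in [BIB010]: each subject-URL pair
    is assigned to 'feed' or 'no-feed' before display, with no-feed pairs
    expected to be twice as numerous as feed pairs."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    assignment = {}
    for pair in subject_url_pairs:
        # P(feed) = 1/3, P(no-feed) = 2/3  =>  E[#no-feed] = 2 * E[#feed]
        assignment[pair] = "feed" if rng.random() < 1 / 3 else "no-feed"
    return assignment
```

Randomizing at the subject-URL pair level (rather than per user) is what lets the study separate genuine influence from the external correlation that makes friends share the same links anyway.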
They find that sharing has a likelihood of 0.191% in the feed condition and 0.025% in no-feed; the likelihood of sharing is thus 7.37 times higher for those in the feed condition. They observe that links tend to be shared immediately upon exposure by those in the feed condition, whereas those in the no-feed condition share links over a marginally longer time period. They observe that link-sharing probability goes up as more of one's contacts share a given link, under feed conditions. On the other hand, in no-feed, a link shared by multiple friends is likely to be shared by a user even if the user has not observed the sharing behavior of friends. This indicates a mixture of internal influence and external correlation in information (link) sharing behavior. The authors explore the impact of strength of ties on the diffusion of the information (URL sharing). Studying individuals who have only one friend that has shared a link previously, they observe that, both in feed and no-feed conditions, link sharing is more likely by an individual when the friend who shared happens to be a strong tie. This effect is seen to be more prominent in no-feed, indicating that strength of ties better predicts activities with external correlation than it predicts influence on the feed. They observe that "individuals are more likely to share content when influenced by their stronger ties on their feed, and share content under such influence that they would not otherwise share". They further observe that the strength of weak ties plays a significant role in consuming and transmitting information that would otherwise not be transmitted and exposed to much of the network, which increases the diversity of information propagation. [BIB007] propose an approach to model the "global influence of a node on the rate of information diffusion through the underlying social network".
To this end, they propose the Linear Influence Model (LIM), in which a newly infected (informed) node is modeled as a "function of other nodes infected in the past". For each node, they "estimate an influence function, to model the number of subsequent infections as a function of the other nodes infected in the past". They formulate their model in a non-parametric manner, transforming their setting into a simple least squares problem, which can scale to large datasets. They validate their model on 500 million tweets and 170 million news articles and blog posts. They show that node influences are modeled accurately by LIM, and that the temporal dynamics of diffusion of information are also predicted reliably. They observe that the influence patterns of participants differ significantly with node types and information topics. In LIM, as information diffuses, a node u is treated as infected from the point of time t_u at which it adopts (first mentions) the information. This enables LIM to be independent of the underlying network. The volume V(t) is defined in their setting as the "number of nodes mentioning the information at time t". They "model the volume over time as a function of which other nodes have mentioned the information beforehand". They assign a "non-negative influence function" I_u(l) to each node, denoting the number of follow-up mentions l time units beyond the adoption of the information by node u. The volume V(t) then becomes "the sum of properly aligned influence functions of nodes u, at time t_u (t_u < t)", i.e., V(t + 1) = Σ_{u ∈ A(t)} I_u(t − t_u), where A(t) is the set of nodes that are "already active (infected, influenced)". They propose two approaches for modeling I_u(l). In a parametric approach, they propose that "I_u(l) would follow a specific parametric form", such as an exponential I_u(l) = c_u e^(−λ_u l) or a power law I_u(l) = c_u l^(−α_u), with parameters depending on node u.
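The LIM volume equation above can be sketched directly; the exponential influence functions and their parameters below are illustrative choices, and the exact time alignment of V(t + 1) is an assumption where the survey's notation is terse.

```python
import math

def lim_volume(infection_times, influence, T):
    """Predicted volume under LIM: V(t + 1) is the sum, over nodes u
    already active at time t, of I_u(t - t_u). `influence[u]` is a
    callable I_u(l); `infection_times[u]` is the adoption time t_u."""
    V = [0.0] * T
    for t in range(T - 1):
        for u, tu in infection_times.items():
            if tu <= t:  # u is in the active set A(t)
                V[t + 1] += influence[u](t - tu)
    return V

# One of the parametric forms mentioned above: I_u(l) = c_u * e^(-lambda_u * l)
# (c and lambda values here are made up for illustration).
inf_fn = {"a": lambda l: 3.0 * math.exp(-0.5 * l),
          "b": lambda l: 1.0 * math.exp(-1.0 * l)}
vol = lim_volume({"a": 0, "b": 2}, inf_fn, T=5)
```

Note that the model never consults an explicit social graph: the predicted volume depends only on who adopted the information and when, which is what makes LIM applicable to implicit networks.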
They observe that the drawback of the parametric approach is that it makes the over-simplified assumption that all nodes follow the same parametric form. In the non-parametric approach, they do not assume any shape for the influence functions; the appropriate shapes are found by the model estimation procedure. They consider time as a discrete vector of length L (a total of L time slots), where the "l-th value represents the value of I_u(l)". To estimate the LIM model parameters, they start by marking M_{u,k}(t) = 1 if contagion k reached node u at time t, and M_{u,k}(t) = 0 otherwise. Since the "volume V_t(k) of contagion k at time t is defined as the number of nodes infected by k at time t", they obtain a linear system relating the volume to the indicators M_{u,k}(t) and the influence functions. They subsequently rectify their model to account for the information recency (novelty) phenomenon: nodes tend to ignore old and obsolete information and adopt recent and novel information. To model how much more or less influential a node is when it mentions the information, they use a multiplicative factor α(t). In this α-LIM model, "α(t) is the same for all contagions", and is expected to "start low, quickly peak and slowly decay". They note that the "resulting matrix equation is convex in I_u(l) when α(t) is fixed and in α(t) when I_u(l) is fixed". Hence, for estimating "I_u(l) and the T values of vector α(t)", they apply coordinate descent, iterating between "fixing α(t) and solving for I_u(l), and then fixing I_u(l) and solving for α(t)". They also account for imitation, where everyone talks about a popular piece of information, by introducing the notion of latent volume: the volume caused by factors other than influence. They add a factor b(t) to model the latent volume, and thereby create the B-LIM model, which is linear in I_u(l) and b(t). [Yang and Counts 2010] explore three core properties of social network information diffusion, namely speed, scale and range.
They collect Twitter data from July 8th 2009 to August 8th 2009, covering 3,243,437 unique users and 22,241,221 posts. They explore the ongoing social interactions of users on Twitter, as denoted by @username mentions (replies) and retweets, which represent active user interaction. To measure how topics propagate through network structures on Twitter, they construct a diffusion network based on mentions. That is, they create an edge from A to B if B mentions A in her tweet that contains a topic C that A had talked about earlier, thus approximating the path of person A diffusing information about topic C. They develop models for speed, scale and range. For the speed analysis, they attempt to understand whether and when followers would be influenced and thereby reply, retweet or otherwise mention the original tweet. They investigate the impact of user and tweet features on the speed of diffusion using a regression model from prior work. They observe that "some properties of tweets predict greater information propagation, but user properties, and specifically the rate that a user is historically mentioned, are equal or stronger predictors". For the scale analysis, they attempt to understand how many people in the network mentioned the same topics as the neighbors of the topic originator. They find the number of mentions of a user to be the strongest predictor for information propagation speed (how quickly a tweet produces an offspring tweet) and scale (the number of offspring tweets a given tweet produces). For the range analysis, they trace topics through the propagation chains and count the number of hops. They observe that the range of information propagation (the number of social hops that information reaches on a diffusion network) is tied to the number of user mentions and to when the tweets come in the observation sequence; the tweets that come later are often seen to be more influential, travelling further over the network.
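The mention-based edge rule above can be sketched as follows; the tuple input format (time, author, topic, mentioned user) is an assumption for illustration.

```python
def diffusion_edges(tweets):
    """Build the mention-based diffusion network described above:
    edge A -> B if B mentions A in a tweet on topic C that A had posted
    about earlier. `tweets` is a list of (time, author, topic, mentioned)
    tuples, with `mentioned` the @username mentioned (or None)."""
    first_post = {}  # (user, topic) -> earliest time the user posted on it
    edges = []
    for t, author, topic, mentioned in sorted(tweets, key=lambda x: x[0]):
        if mentioned is not None:
            prev = first_post.get((mentioned, topic))
            if prev is not None and prev < t:
                edges.append((mentioned, author))
        first_post.setdefault((author, topic), t)
    return edges
```

Chaining such edges and counting hops along them gives the range measure used in the analysis; for example, a -> b -> d above is a propagation chain of two hops.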
[BIB008] build an influence model using the Flickr social network graph and user action logs, and propose a technique to predict the time within which a given user would be expected to perform an action. Other studies, such as [BIB004], [BIB003], and others, provide significant insights into the flow of information and influence along social edges, over user interactions. Further, other research works have attempted to model the influence of content generated by users on content generated by other users; one such work, for instance, explores bloggers' networks for modeling influence propagation. [BIB011] explore the correlation between the sentiments that Twitter users express and their information sharing behavior, experimenting on political communication data. Using data from the 2011 Seoul (Korea) mayoral elections for a particular candidate who had used Twitter extensively, [BIB012] show that, rather than sharing and circulating several ideas, the communication had taken place in the form of aggregation and propagation. The communication pattern structures were fragmented rather than transitive, signifying that during the election period, communication in general had occurred from, or converged to, a single node, and mostly did not circulate through multiple nodes.
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> We present two methodologies for the detection of emerging trends in the area of textual data mining. These manual methods are intended to help us improve the performance of our existing fully automatic trend detection system [3]. The first methodology uses citations traces with pruning metrics to generate a document set for an emerging trend. Following this, threshold values are tested to determine the year that the trend emerges. The second methodology uses web resources to identify incipient emerging trends. We demonstrate with a confidence level of 99% that our second approach results in a significant improvement in the precision of trend detection. Lastly we propose the integration of these methods for both the improvement of our existing fully automatic approach as well as in the deployment of our semi-automated CIMEL [20] prototype that employs emerging trends detection to enhance multimedia-based Computer Science education. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. 
We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> Most of the existing document and web search engines rely on keyword-based queries. To find matches, these queries are processed using retrieval algorithms that rely on word frequencies, topic recentness, document authority, and (in some cases) available ontologies. In this paper, we propose an innovative approach to exploring text collections using a novel keywords-by-concepts (KbC) graph, which supports navigation using domain-specific concepts as well as keywords that are characterizing the text corpus. The KbC graph is a weighted graph, created by tightly integrating keywords extracted from documents and concepts obtained from domain taxonomies. Documents in the corpus are associated to the nodes of the graph based on evidence supporting contextual relevance; thus, the KbC graph supports contextually informed access to these documents. In this paper, we also present CoSeNa (Context-based Search and Navigation) system that leverages the KbC model as the basis for document exploration and retrieval as well as contextually-informed media integration. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> We describe a system that monitors social and mainstream media to determine shifts in what people are thinking about a product or company. We process over 100,000 news articles, blog posts, review sites, and tweets a day for mentions of items (e.g., products) of interest, extract phrases that are mentioned near them, and determine which of the phrases are of greatest possible interest to, for example, brand managers. 
Case studies show a good ability to rapidly pinpoint emerging subjects buried deep in large volumes of data and then highlight those that are rising or falling in significance as they relate to the firm's interests. The tool and algorithm improve the signal-to-noise ratio and pinpoint precisely the opportunities and risks that matter most to communications professionals and their organizations. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> In this article we explore the behavior of Twitter users under an emergency situation. In particular, we analyze the activity related to the 2010 earthquake in Chile and characterize Twitter in the hours and days following this disaster. Furthermore, we perform a preliminary study of certain social phenomena, such as the dissemination of false rumors and confirmed news. We analyze how this information propagated through the Twitter network, with the purpose of assessing the reliability of Twitter as an information source under extreme circumstances. Our analysis shows that the propagation of tweets that correspond to rumors differs from tweets that spread news because rumors tend to be questioned more than news by the Twitter community. This result shows that it is possible to detect rumors by using aggregate analysis on tweets. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> We analyze microblog posts generated during two recent, concurrent emergency events in North America via Twitter, a popular microblogging service. We focus on communications broadcast by people who were "on the ground" during the Oklahoma Grassfires of April 2009 and the Red River Floods that occurred in March and April 2009, and identify information that may contribute to enhancing situational awareness (SA).
This work aims to inform next steps for extracting useful, relevant information during emergencies using information extraction (IE) techniques. <s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> We present TwitterMonitor, a system that performs trend detection over the Twitter stream. The system identifies emerging topics (i.e. 'trends') on Twitter in real time and provides meaningful analytics that synthesize an accurate description of each topic. Users interact with the system by ordering the identified trends using different criteria and submitting their own description for each trend. We discuss the motivation for trend detection over social media streams and the challenges that lie therein. We then describe our approach to trend detection, as well as the architecture of TwitterMonitor. Finally, we lay out our demonstration scenario. <s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> Twitter, Facebook, and other related systems that we call social awareness streams are rapidly changing the information and communication dynamics of our society. These systems, where hundreds of millions of users share short messages in real time, expose the aggregate interests and attention of global and local communities. In particular, emerging temporal trends in these systems, especially those related to a single geographic area, are a significant and revealing source of information for, and about, a local community. This study makes two essential contributions for interpreting emerging temporal trends in these information systems. First, based on a large dataset of Twitter messages from one geographic area, we develop a taxonomy of the trends present in the data. 
Second, we identify important dimensions according to which trends can be categorized, as well as the key distinguishing features of trends that can be derived from their associated messages. We quantitatively examine the computed features for different categories of trends, and establish that significant differences can be detected across categories. Our study advances the understanding of trends on Twitter and other social awareness streams, which will enable powerful applications and activities, including user-driven real-time information services for local communities. © 2011 Wiley Periodicals, Inc. <s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> Hashtags are used in Twitter to classify messages, propagate ideas and also to promote specific topics and people. In this paper, we present a linguistic-inspired study of how these tags are created, used and disseminated by the members of information networks. We study the propagation of hashtags in Twitter grounded on models for the analysis of the spread of linguistic innovations in speech communities, that is, in groups of people whose members linguistically influence each other. Differently from traditional linguistic studies, though, we consider the evolution of terms in a live and rapidly evolving stream of content, which can be analyzed in its entirety. In our experimental results, using a large collection crawled from Twitter, we were able to identify some interesting aspects -- similar to those found in studies of (offline) speech -- that led us to believe that hashtags may effectively serve as models for characterizing the propagation of linguistic forms, including: (1) the existence of a "preferential attachment process", that makes the few most common terms ever more popular, and (2) the relationship between the length of a tag and its frequency of use. 
The understanding of formation patterns of successful hashtags in Twitter can be useful to increase the effectiveness of real-time streaming search algorithms. <s> BIB009 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> Online social networking websites such as Twitter and Facebook often serve a breaking-news role for natural disasters: these websites are among the first ones to mention the news, and because they are visited by millions of users regularly the websites also help communicate the news to a large mass of people. In this paper, we examine how news about these disasters spreads on the social network. In addition to this, we also examine the countries of the Tweeting users. We examine Twitter logs from the 2010 Philippines typhoon, the 2011 Brazil flood and the 2011 Japan earthquake. We find that although news about the disaster may be initiated in multiple places in the social network, it quickly finds a core community that is interested in the disaster, and has little chance to escape the community via social network links alone. We also find evidence that the world at large expresses concern about such largescale disasters, and not just countries geographically proximate to the epicenter of the disaster. Our analysis has implications for the design of fund raising campaigns through social networking websites. <s> BIB010 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> We present a novel topic modelling-based methodology to track emerging events in microblogs such as Twitter. Our topic model has an in-built update mechanism based on time slices and implements a dynamic vocabulary. 
We first show that the method is robust in detecting events using a range of datasets with injected novel events, and then demonstrate its application in identifying trending topics in Twitter. <s> BIB011 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> As social media continue to grow, the zeitgeist of society is increasingly found not in the headlines of traditional media institutions, but in the activity of ordinary individuals. The identification of trending topics utilises social media (such as Twitter) to provide an overview of the topics and issues that are currently popular within the online community. In this paper, we outline methodologies of detecting and identifying trending topics from streaming data. Data from Twitter's streaming API was collected and put into documents of equal duration using data collection procedures that allow for analysis over multiple timespans, including those not currently associated with Twitter-identified trending topics. Term frequency-inverse document frequency analysis and relative normalised term frequency analysis were performed on the documents to identify the trending topics. Relative normalised term frequency analysis identified unigrams, bigrams, and trigrams as trending topics, while term frequency-inverse document frequency analysis identified unigrams as trending topics. Application of these methodologies to streaming data resulted in F-measures ranging from 0.1468 to 0.7508. <s> BIB012 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> Microblog services have emerged as an essential way to strengthen the communications among individuals and organizations. 
These services promote timely and active discussions and comments towards products, markets as well as public events, and have attracted a lot of attention from organizations. In particular, emerging topics are of immediate concern to organizations since they signal current concerns of, and feedback by their users. Two challenges must be tackled for effective emerging topic detection. One is the problem of real-time relevant data collection and the other is the ability to model the emerging characteristics of detected topics and identify them before they become hot topics. To tackle these challenges, we first design a novel scheme to crawl the relevant messages related to the designated organization by monitoring multi-aspects of microblog content, including users, the evolving keywords and their temporal sequence. We then develop an incremental clustering framework to detect new topics, and employ a range of content and temporal features to help in promptly detecting hot emerging topics. Extensive evaluations on a representative real-world dataset based on Twitter data demonstrate that our scheme is able to characterize emerging topics well and detect them before they become hot topics. <s> BIB013 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> Among the vast information available on the web, social media streams capture what people currently pay attention to and how they feel about certain topics. Awareness of such trending topics plays a crucial role in multimedia systems such as trend aware recommendation and automatic vocabulary selection for video concept detection systems. Correctly utilizing trending topics requires a better understanding of their various characteristics in different social media streams.
To this end, we present the first comprehensive study across three major online and social media streams, Twitter, Google, and Wikipedia, covering thousands of trending topics during an observation period of an entire year. Our results indicate that depending on one's requirements one does not necessarily have to turn to Twitter for information about current events and that some media streams strongly emphasize content of specific categories. As our second key contribution, we further present a novel approach for the challenging task of forecasting the life cycle of trending topics in the very moment they emerge. Our fully automated approach is based on a nearest neighbor forecasting technique exploiting our assumption that semantically similar topics exhibit similar behavior. We demonstrate on a large-scale dataset of Wikipedia page view statistics that forecasts by the proposed approach are about 9-48k views closer to the actual viewing statistics compared to baseline methods and achieve a mean average percentage error of 45-19% for time periods of up to 14 days. <s> BIB014 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Lifecycle on Social Media <s> Twitter has become as much of a news media as a social network, and much research has turned to analyzing its content for tracking real-world events, from politics to sports and natural disasters. This paper describes the techniques we employed for the SNOW Data Challenge 2014, described in [Pap14]. We show that aggressive filtering of tweets based on length and structure, combined with hierarchical clustering of tweets and ranking of the resulting clusters, achieves encouraging results. We present empirical results and discussion for two different Twitter streams focusing on the US presidential elections in 2012 and the recent events about Ukraine, Syria and the Bitcoin, in February 2014. <s> BIB015
Trend discovery from digital media text has long been a research problem of significant scientific interest, and remains an area of active work BIB012 BIB004 BIB001 . Trend and topic propagation is one of the key factors associated with information diffusion on online social networks. Identifying topics and trends successfully helps solve several practical problems. Natural disaster analysis and recovery is one such area, explored by BIB005 and BIB010 . BIB006 empirically explore how Twitter can contribute to situational awareness over two natural hazard events, namely the Oklahoma Grassfires of April 2009 and the Red River Floods of March and April 2009. Early identification of the topics customers discuss online can help organizations better understand and grow their products and services, as well as control damage early BIB013 . Of late, one of the key areas within this research field has been the detection of topics and trends in microblogs such as Twitter, where individual posts are often associated with one topic or a few related topics. A number of research studies have been conducted, predominantly since 2010, that attempt to identify trends and topics and watch them evolve and spread in social networks. BIB007 present one of the early research works in detecting Twitter trends in real time and analyzing the lifecycle of the trends. They define bursty keywords as "keywords that suddenly appear in tweets at unusually high rates". Subsequently, they define a trend as a "set of bursty keywords frequently occurring together in tweets". Their system, TwitterMonitor, follows a two-step Twitter trend detection mechanism, with a third step for analyzing the detected trends. In the first step, they identify keywords suddenly appearing in tweets at unusually high rates, namely the bursty keywords. In order to identify bursty keywords effectively, they propose an algorithm named QueueBurst, based on queuing theory.
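Based on these two definitions — bursty keywords, and trends as sets of bursty keywords that co-occur — the pipeline can be illustrated with a deliberately naive sketch. This is not the queueing-theoretic QueueBurst algorithm; the rate-ratio test, thresholds, and function names below are our own simplifications.

```python
from collections import defaultdict
from itertools import combinations

def detect_bursty_keywords(window_counts, baseline_rates, total_tweets,
                           ratio=5.0, min_count=3):
    """Flag keywords whose arrival rate in the current time window far
    exceeds their historical baseline rate (a crude stand-in for the
    queueing-theoretic burst test)."""
    bursty = set()
    for kw, count in window_counts.items():
        rate = count / total_tweets
        base = baseline_rates.get(kw, 1e-6)
        if count >= min_count and rate / base >= ratio:
            bursty.add(kw)
    return bursty

def group_into_trends(tweets, bursty, min_cooccur=2):
    """Group bursty keywords into disjoint trends: keywords that co-occur
    in enough tweets end up in the same connected component."""
    cooccur = defaultdict(int)
    for tokens in tweets:
        present = sorted(set(tokens) & bursty)
        for a, b in combinations(present, 2):
            cooccur[(a, b)] += 1
    # union-find over the co-occurrence graph
    parent = {k: k for k in bursty}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (a, b), n in cooccur.items():
        if n >= min_cooccur:
            parent[find(a)] = find(b)
    trends = defaultdict(set)
    for k in bursty:
        trends[find(k)].add(k)
    return list(trends.values())
```

For example, if "quake" and "chile" suddenly spike together during an earthquake while everyday chatter stays at its baseline rate, the two keywords are flagged as bursty and merged into a single trend.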
The QueueBurst algorithm reads streaming data in one pass, and detects bursty keywords in real time. It protects against spam and spurious bursts, where, by coincidence, a keyword appears in several tweets within a short time period. Subsequently, in the second step, they group the bursty keywords into trends, based upon co-occurrences of the keywords. They compute a set of bursty keywords K_t at every time instant t, that can possibly be part of a trend (or even the same trend). They periodically group keywords k ∈ K_t into disjoint subsets K_t^i of K_t, so that all keywords in the same subset are grouped under the same discussion topic. Given the subsets K_t^i, a single subset can identify a trend. Thus, they identify trends as groups of bursty keywords that frequently occur together. By identifying more keywords related to a given trend using content extraction algorithms, identifying frequently cited news sources and adding such sources to the trend description, and exploiting geographical locality attributes of the origin of the tweets contributing to the identified trends (for instance, Thanksgiving in Canada makes it likely that a large proportion of the relevant tweets originate from Canada), they produce a chart illustrating the evolution of the popularity of a trend during its lifecycle. BIB011 propose a methodology for online topic modeling, for tracking emerging events on Twitter, that considers a constant evolution of topics over time and is amenable to dynamic changes in vocabulary. To this end, they propose an online variant of the traditional LDA BIB002 method, enhanced with the estimation of P(z|w), the "posterior distribution over assignments of words to topics", by Griffiths and Steyvers [2004]. The online version of LDA they propose processes the inputs and periodically updates the model. It produces topics comparable across different periods, which enables measuring topic shifts. Further, the size of topics does not grow with time.
They summarize the traditional LDA along with the incorporation of the Griffiths and Steyvers [2004] methodology, and experiment with injecting novel events on-the-fly, showing that the model is capable of detecting topics under such settings.

Several other works study topic lifecycles on social media.

BIB008 creates a taxonomy of geographical area-specific trends, based upon Twitter messages collected from the given areas. It identifies significant dimensions that enable trend categorization, along with distinguishing features of trends, and empirically establishes the existence of significant differences in computed features across trend categories.

BIB015 filters tweets based on the length and structure of the messages, removing noisy tweets and vocabulary, and combines this with hierarchical tweet clustering, dynamic dendrogram cutting, and ranking of the clusters. It computes the pairwise distance of tweets by normalizing the tweet-term matrix and applying cosine similarity, and feeds the output into clustering. It selects the first tweet in each of the first 20 clusters as topic headlines, and re-clusters the headlines to avoid topic fragmentation. The work shows that length- and structure-based aggressive filtering of tweets, combined with clustering the tweets hierarchically and ranking the resulting clusters, works well for detecting and labeling events.

BIB003 proposes a real-time detection technique for emergent topics expressed by communities. It analyzes the authority of the content source using PageRank, and models term life cycles using an aging technique. It experiments with two days of Twitter data, and identifies the top 5 emergent terms at a given time slot to demonstrate an example of the model output.

BIB009 studies the propagation and dynamic evolution of hashtags. Motivated by the concept of linguistic innovation that models language transformation, it defines hashtag innovation as a transformation of the hashtag, and observes that individuals seeking to categorize their message with a term not yet used for that purpose tend to create new hashtags.
It also observes the rich-getting-richer phenomenon: a few hashtags tend to attract most of the attention.

Another line of work models information flow over event clusters on social media. It identifies social discussion threads by detecting social and content-based connections across event clusters and applying temporal filters on these clusters, and shows that topical discussions grow and evolve along social connections over time, rather than at random.

BIB014 uses historical time series data from multiple semantically similar topics to forecast the lifecycle of trending topics as they emerge, using nearest neighbor sequence matching over historical events that occurred with a similar time span. It studies Twitter, Google, and Wikipedia, three primary online social media streams, over thousands of topics and an entire year, observing the emerging trends to empirically validate the approach.
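The cosine-similarity tweet clustering and cluster ranking described above can be sketched in simplified form. The code below is a greedy single-pass variant, not the hierarchical clustering with dendrogram cutting used in the cited work; the threshold value and all names are our own assumptions.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_tweets(tweets, threshold=0.5):
    """Greedy single-pass clustering: each tokenized tweet joins the first
    cluster whose centroid is close enough in cosine similarity, else it
    starts a new cluster. Clusters are ranked by size, and the first tweet
    of each cluster serves as its headline (index returned)."""
    clusters = []  # list of (centroid Counter, [tweet indices])
    for i, tokens in enumerate(tweets):
        vec = Counter(tokens)
        for centroid, members in clusters:
            if cosine(vec, centroid) >= threshold:
                centroid.update(vec)  # fold the tweet into the centroid
                members.append(i)
                break
        else:
            clusters.append((Counter(vec), [i]))
    clusters.sort(key=lambda c: len(c[1]), reverse=True)
    return [(members[0], members) for _, members in clusters]
```

On a toy stream such as two tweets about a protest and one about Bitcoin, the protest tweets merge into the top-ranked cluster and its first tweet becomes the headline, while the Bitcoin tweet forms its own cluster.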
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> An interval-based temporal logic is introduced, together with a computationally effective reasoning algorithm based on constraint propagation. This system is notable in offering a delicate balance between <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> The importance of a Web page is an inherently subjective matter, which depends on the reader's interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Topic Detection and Tracking (TDT) is a research initiative that aims at techniques to organize news documents in terms of news events. We propose a method that incorporates simple semantics into TDT by splitting the term space into groups of terms that have the meaning of the same type. Such a group can be associated with an external ontology. This ontology is used to determine the similarity of two terms in the given group. We extract proper names, locations, temporal expressions and normal terms into distinct sub-vectors of the document representation. Measuring the similarity of two documents is conducted by comparing a pair of their corresponding sub-vectors at a time.
We use a simple perceptron to optimize the relative emphasis of each semantic class in the tracking and detection decisions. The results suggest that the spatial and the temporal similarity measures need to be improved. Especially the vagueness of spatial and temporal terms needs to be addressed. <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human- and machine-consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Most of the existing document and web search engines rely on keyword-based queries. To find matches, these queries are processed using retrieval algorithms that rely on word frequencies, topic recentness, document authority, and (in some cases) available ontologies. In this paper, we propose an innovative approach to exploring text collections using a novel keywords-by-concepts (KbC) graph, which supports navigation using domain-specific concepts as well as keywords that are characterizing the text corpus.
The KbC graph is a weighted graph, created by tightly integrating keywords extracted from documents and concepts obtained from domain taxonomies. Documents in the corpus are associated to the nodes of the graph based on evidence supporting contextual relevance; thus, the KbC graph supports contextually informed access to these documents. In this paper, we also present CoSeNa (Context-based Search and Navigation) system that leverages the KbC model as the basis for document exploration and retrieval as well as contextually-informed media integration. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> With the recent rise in popularity and size of social media, there is a growing need for systems that can extract useful information from this amount of data. We address the problem of detecting new events from a stream of Twitter posts. To make event detection feasible on web-scale corpora, we present an algorithm based on locality-sensitive hashing which is able to overcome the limitations of traditional approaches, while maintaining competitive results. In particular, a comparison with a state-of-the-art system on the first story detection task shows that we achieve over an order of magnitude speedup in processing time, while retaining comparable performance. Event detection experiments on a collection of 160 million Twitter posts show that celebrity deaths are the fastest spreading news on Twitter. <s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Twitter is a user-generated content system that allows its users to share short text messages, called tweets, for a variety of purposes, including daily conversations, URLs sharing and information news.
Considering its world-wide distributed network of users of any age and social condition, it represents a low level news flashes portal that, in its impressive short response time, has the principal advantage. In this paper we recognize this primary role of Twitter and we propose a novel topic detection technique that permits to retrieve in real-time the most emergent topics expressed by the community. First, we extract the contents (set of terms) of the tweets and model the term life cycle according to a novel aging theory intended to mine the emerging ones. A term can be defined as emerging if it frequently occurs in the specified time interval and it was relatively rare in the past. Moreover, considering that the importance of a content also depends on its source, we analyze the social relationships in the network with the well-known Page Rank algorithm in order to determine the authority of the users. Finally, we leverage a navigable topic graph which connects the emerging terms with other semantically related keywords, allowing the detection of the emerging topics, under user-specified time constraints. We provide different case studies which show the validity of the proposed approach. <s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Streaming user-generated content in the form of blogs, microblogs, forums, and multimedia sharing sites, provides a rich source of data from which invaluable information and insights maybe gleaned. Given the vast volume of such social media data being continually generated, one of the challenges is to automatically tease apart the emerging topics of discussion from the constant background chatter. Such emerging topics can be identified by the appearance of multiple posts on a unique subject matter, which is distinct from previous online discourse. 
We address the problem of identifying emerging topics through the use of dictionary learning. We propose a two stage approach respectively based on detection and clustering of novel user-generated content. We derive a scalable approach by using the alternating directions method to solve the resulting optimization problems. Empirical results show that our proposed approach is more effective than several baselines in detecting emerging topics in traditional news story and newsgroup data. We also demonstrate the practical application to social media analysis, based on a study on streaming data from Twitter. <s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Twitter, Facebook, and other related systems that we call social awareness streams are rapidly changing the information and communication dynamics of our society. These systems, where hundreds of millions of users share short messages in real time, expose the aggregate interests and attention of global and local communities. In particular, emerging temporal trends in these systems, especially those related to a single geographic area, are a significant and revealing source of information for, and about, a local community. This study makes two essential contributions for interpreting emerging temporal trends in these information systems. First, based on a large dataset of Twitter messages from one geographic area, we develop a taxonomy of the trends present in the data. Second, we identify important dimensions according to which trends can be categorized, as well as the key distinguishing features of trends that can be derived from their associated messages. We quantitatively examine the computed features for different categories of trends, and establish that significant differences can be detected across categories. 
Our study advances the understanding of trends on Twitter and other social awareness streams, which will enable powerful applications and activities, including user-driven real-time information services for local communities. © 2011 Wiley Periodicals, Inc. <s> BIB009 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Hashtags are used in Twitter to classify messages, propagate ideas and also to promote specific topics and people. In this paper, we present a linguistic-inspired study of how these tags are created, used and disseminated by the members of information networks. We study the propagation of hashtags in Twitter grounded on models for the analysis of the spread of linguistic innovations in speech communities, that is, in groups of people whose members linguistically influence each other. Differently from traditional linguistic studies, though, we consider the evolution of terms in a live and rapidly evolving stream of content, which can be analyzed in its entirety. In our experimental results, using a large collection crawled from Twitter, we were able to identify some interesting aspects -- similar to those found in studies of (offline) speech -- that led us to believe that hashtags may effectively serve as models for characterizing the propagation of linguistic forms, including: (1) the existence of a "preferential attachment process", that makes the few most common terms ever more popular, and (2) the relationship between the length of a tag and its frequency of use. The understanding of formation patterns of successful hashtags in Twitter can be useful to increase the effectiveness of real-time streaming search algorithms. 
<s> BIB010 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Among the vast information available on the web, social media streams capture what people currently pay attention to and how they feel about certain topics. Awareness of such trending topics plays a crucial role in multimedia systems such as trend aware recommendation and automatic vocabulary selection for video concept detection systems. Correctly utilizing trending topics requires a better understanding of their various characteristics in different social media streams. To this end, we present the first comprehensive study across three major online and social media streams, Twitter, Google, and Wikipedia, covering thousands of trending topics during an observation period of an entire year. Our results indicate that depending on one's requirements one does not necessarily have to turn to Twitter for information about current events and that some media streams strongly emphasize content of specific categories. As our second key contribution, we further present a novel approach for the challenging task of forecasting the life cycle of trending topics in the very moment they emerge. Our fully automated approach is based on a nearest neighbor forecasting technique exploiting our assumption that semantically similar topics exhibit similar behavior. We demonstrate on a large-scale dataset of Wikipedia page view statistics that forecasts by the proposed approach are about 9-48k views closer to the actual viewing statistics compared to baseline methods and achieve a mean average percentage error of 45-19% for time periods of up to 14 days. 
<s> BIB011 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Steyvers [Griffiths and Steyvers 2004] methodology as <s> Twitter has become as much of a news media as a social network, and much research has turned to analyzing its content for tracking real-world events, from politics to sports and natural disasters. This paper describes the techniques we employed for the SNOW Data Challenge 2014, described in [Pap14]. We show that aggressive filtering of tweets based on length and structure, combined with hierarchical clustering of tweets and ranking of the resulting clusters, achieves encouraging results. We present empirical results and discussion for two different Twitter streams focusing on the US presidential elections in 2012 and the recent events about Ukraine, Syria and the Bitcoin, in February 2014. <s> BIB012
Here n(d,t) and n(t,w) respectively denote the assignment counts of topic t in document d and of word w to topic t, excluding the current assignment z. To transform the method into an online (streamed) one, they propose a model that can process the input and update itself periodically. They use time slices k_t, and a "sliding window L that retains documents for a given number of previous time slices". As time slice k_{t+1} arrives, they "resample topic assignments z for all documents in window L" to update the model, using the θ and φ values from the earlier model in time slice k_t to serve as "Dirichlet priors α′ and β′ in the evolved model in time slice k_{t+1}". They introduce a contribution factor c (0 ≤ c ≤ 1) to "enable their model to have a set of constantly evolving topics", where c = 0 indicates that the model is run without any previously learned parameters. The time window ensures that their topic model remains sensitive to topic changes over time. To accommodate a dynamic vocabulary, they remove words falling below a frequency threshold and add new words satisfying the threshold, along the time slices. For previously seen documents and words, the "Dirichlet priors α′ and β′ in the new model in time slice k_{t+1}" are carried over from the assignment counts of the earlier model; for new documents and words, they are set as α′_dt = α_0 and β′_tw = β_0. Here α′_dt and β′_tw are the priors for topic t in document d and for word w in topic t respectively, n(d,t) and n(t,w) are the numbers of assignments in the earlier model of time slice k_t, and "D_old, N_old and W_new are respectively the number of documents previously processed, number of tokens in those documents and vocabulary size, in time window L". They normalize to maintain a "constant sum of priors across different processing batches", i.e., ∑α′ = ∑α = D × T × α_0 and ∑β′ = ∑β = T × W × β_0.
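A minimal sketch of this prior carry-over in Python with NumPy. The way old counts are mixed with the base priors via the contribution factor c is an assumption (the survey does not reproduce the exact expression); only the constant-sum constraints ∑α′ = D × T × α_0 and ∑β′ = T × W × β_0 are taken from the text, and `evolve_priors` is a hypothetical helper name:

```python
import numpy as np

def evolve_priors(n_dt, n_tw, c, alpha0=0.5, beta0=0.01):
    """Carry Dirichlet priors from time slice k_t into k_{t+1}.

    n_dt: (D, T) topic-assignment counts per old document.
    n_tw: (T, W) word-assignment counts per topic.
    c: contribution factor in [0, 1]; c = 0 ignores the old model.
    The mixing rule is illustrative; the final rescaling enforces the
    constant prior mass described in the survey.
    """
    D, T = n_dt.shape
    _, W = n_tw.shape
    # mix normalized old evidence with the flat base priors
    alpha = (1 - c) * alpha0 + c * n_dt / max(n_dt.sum(), 1) * (D * T * alpha0)
    beta = (1 - c) * beta0 + c * n_tw / max(n_tw.sum(), 1) * (T * W * beta0)
    # rescale so sum(alpha') = D*T*alpha0 and sum(beta') = T*W*beta0
    alpha *= (D * T * alpha0) / alpha.sum()
    beta *= (T * W * beta0) / beta.sum()
    return alpha, beta

# demo: half of the prior mass comes from the old model's counts
old_n_dt = np.array([[3.0, 1.0], [0.0, 2.0]])            # 2 docs x 2 topics
old_n_tw = np.array([[2.0, 0.0, 1.0], [1.0, 1.0, 1.0]])  # 2 topics x 3 words
alpha_new, beta_new = evolve_priors(old_n_dt, old_n_tw, c=0.5)
```

With c = 0 the function reduces to the flat symmetric priors α_0 and β_0, matching the survey's description of a model run without previously learned parameters.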
For tracking emerging events, they measure the shift (degree of change) in the topic model, that is, the evolution of each topic "between the word distribution of each topic t before and after an update", using Jensen-Shannon (JS) divergence. If the shift exceeds a threshold, they classify a topic as novel. They demonstrate their model on synthetic Twitter datasets, created by mixing a real-life (unannotated) Twitter data stream with (annotated) data from the TREC Topic Detection and Tracking (TDT) corpus. For experiments, they collect data using Twitter's streaming API from September 2011 to January 2012, comprising 12 million tweets spanning 1.39 million users. They also apply their model to "a series of Twitter feeds, to detect topics popular in specific locations". For experiments, the length of a time slice and the window size are respectively set to 1 day and 2 days. They find the detected popular topics to closely follow local and global news events. They observe that topics expressed as multinomial distributions over terms are more descriptive than strings or single hashtags. Thus, they show that their model is capable of detecting emerging topics under such settings. BIB006 ] create a locality-sensitive hashing technique to detect new events from a stream of Twitter posts. Their approach is empirically shown to be an order of magnitude faster than the state of the art, while retaining performance. BIB008 ] use dictionary learning to detect emerging topics on Twitter. They use a two-stage approach to detect and cluster new content generated by users. They apply their system on streaming data, showing the effectiveness of their approach. ] use the approach of BIB006 ], but filter using Wikipedia, reducing the number of spurious topics that topic detection systems often report. They empirically show that events within Wikipedia tend to lag behind Twitter. BIB009 ] characterize emerging trends on Twitter.
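The Jensen-Shannon shift test used above to flag novel topics can be sketched directly; the threshold value of 0.5 is an illustrative assumption, not the paper's setting:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2, so the value lies in [0, 1])
    between two word distributions over the same vocabulary."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log(0) terms contribute nothing
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def is_novel(before, after, threshold=0.5):
    """Flag a topic as novel if its word distribution shifted by more
    than `threshold` across a model update."""
    return js_divergence(before, after) > threshold
```

Disjoint distributions give the maximum divergence of 1, identical distributions give 0, so a fixed cutoff in between separates stable topics from emerging ones.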
They develop a taxonomy of geographical-area-specific trends, based upon Twitter messages collected from the given geographic areas. They denote the Twitter-given trends as T_tw. They collect Twitter's local trending terms. They identify the highest-trending terms using a message set and term frequency (tf) pair, such that the message set contains at least 100 messages. They identify bursts via terms that appear more frequently than expected in a given message set, within a given time period. They score a term by subtracting its expected number of occurrences from its observed occurrence count. They retain each term that would score in the top 30 for a given day in a given week, for a sufficiently large number of hours. They assemble the scores to assign a score to each bursty trend comprising a set of such terms. They add these terms to T_tw, and pick the top 1,500 trends to form T_tf. The authors run qualitative and quantitative analyses on a random subset of T_tw and T_tf, as they observe that computing on the whole would be prohibitively expensive. They select trends that: (a) reflect the trend diversity present in the source sets, and (b) are human-interpretable, inspecting the associated Twitter messages. They take a set union of the selected trends, denoted T, and split it into two subsets, T_Qual and T_Quant, on which they perform qualitative and quantitative analysis respectively. They associate tweet messages with trends by aligning the messages with trend peak times and the surrounding 72 hours before and after. They observe M_t = 1350 in T_Quant, that is, 1,350 tweet messages on average are associated with each trend t. They broadly classify trends into two types: exogenous trends that capture activities, interests and events originating outside Twitter, and endogenous trends that capture Twitter-only events that do not exist outside Twitter.
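The burst scoring described above (observed occurrences minus expected occurrences) can be sketched as follows; estimating the expectation from a per-message historical rate is an assumption, and the top-30 retention rule is omitted:

```python
from collections import Counter

def burst_scores(window_msgs, history_msgs):
    """Score each term in the current window by its observed count
    minus the count expected from its historical per-message rate.
    Each message is a list of tokens."""
    obs = Counter(t for m in window_msgs for t in m)
    hist = Counter(t for m in history_msgs for t in m)
    n_hist = max(len(history_msgs), 1)
    return {term: count - hist[term] / n_hist * len(window_msgs)
            for term, count in obs.items()}

# demo: "earthquake" appears far more often than its history suggests
history = [["hello"]] * 9 + [["earthquake"]]
window = [["earthquake", "now"], ["earthquake"]]
scores = burst_scores(window, history)
```

Terms with large positive scores are burst candidates; terms occurring at their historical rate score near zero.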
Exogenous trends comprise global news events, broadcast media events, national holidays, memorable days and local participation-based (physical) events, while endogenous trends comprise retweets, memes and activities of fan communities. To characterize the two types of trends, they derive different types of features. These include 7 content features based upon the content of messages in M_t, 3 interaction features based upon the @-username interactions amongst users, 4 time-based features that vary across trends and capture the temporal patterns of information spread, 3 participation features based upon authorship of messages associated with given trends, and 7 social network features built upon the followers of each message, for messages belonging to M_t. They empirically establish the existence of significant differences in the feature sets of different categories of trends. They show that exogenous trends have higher URL proportions, smaller hashtag proportions, fewer retweets, fewer social connections between authors and different (temporal) head periods compared to endogenous trends. They show that breaking news has more retweets (forwards), fewer replies (conversations) and more rapid temporal growth compared to other exogenous trends, as well as different social network features. They notice local events to have denser social networks, higher connectivity, more social reciprocity, and more replies, compared to other exogenous trends. They further notice memes to have higher connectivity and more reciprocity compared to retweet trends, among endogenous events. [ BIB012 ] take a two-pronged approach. One, they conduct aggressive filtering of tweets and terms, in order to remove noisy tweets and restrict the vocabulary. They normalize tweet text and remove user mentions, URLs, digits, hashtags and punctuation. They tokenize by whitespace, remove stopwords, and append hashtags, user mentions and de-noised text tokens.
From the tweets thus obtained, they remove the tweets with (a) more than 2 user mentions, or (b) more than 2 hashtags, or (c) fewer than 4 tokens. The intuition is to eliminate tweets with too many user mentions or hashtags but too little clean information content (text). Effectively, this acts as noise elimination. For vocabulary filtering, they remove user mentions, and retain bi-grams and tri-grams that are present in at least a threshold number (10) of tweets. They subsequently retain tweets with at least 5 in-vocabulary words, in order to keep tweets that can be meaningfully clustered and to eliminate tweets with little vocabulary coverage. Two, they combine this with hierarchical tweet clustering, dynamic dendrogram cutting and ranking of the clusters. They compute the pairwise distance of tweets by normalizing the tweet-term matrix and applying cosine similarity. They perform topic-based clustering of the tweets using the distance thus obtained. They cut the resulting dendrogram at an empirically fixed value of 0.5, avoiding too-tight or too-loose clusters and topic fragmentation. They rank the resulting clusters. They observe that ranking the clusters by size, and labeling these clusters as trending topics, does not yield good results, as the topics are casual and repetitive, and by inspection appear unlikely to make news headlines. As an alternative approach, they use the df−idf_t formula of , which approximates the current-window term frequency by the average term frequency of the past t time windows. For experiments, they set the history size t = 4. They assign a high weight in the idf_t term to recognized named entities, as they observe such assignments tend to retrieve more news-like topics. They select the first tweet of each of the first 20 ranked clusters as the headline of the detected topics. They re-cluster the headlines to avoid topic fragmentation.
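The aggressive tweet filter described above can be sketched as below; the regular expressions are simplified assumptions rather than the authors' exact preprocessing:

```python
import re

def keep_tweet(text, max_mentions=2, max_hashtags=2, min_tokens=4):
    """Drop tweets with more than 2 user mentions, more than 2
    hashtags, or fewer than 4 tokens once mentions, URLs, hashtags,
    digits and punctuation are stripped."""
    mentions = re.findall(r'@\w+', text)
    hashtags = re.findall(r'#\w+', text)
    cleaned = re.sub(r'(https?://\S+|@\w+|#\w+|\d+|[^\w\s])', ' ', text)
    tokens = cleaned.split()
    return (len(mentions) <= max_mentions
            and len(hashtags) <= max_hashtags
            and len(tokens) >= min_tokens)
```

A tweet like "@a @b @c thanks everyone" fails the mention rule, while a plain-text sentence of four or more words passes.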
They finally present the raw tweet content of the headline (without URLs) with the earliest publication time as the final topic headline. [ BIB007 ] propose a real-time technique for detecting emergent topics expressed by communities. They define a term as a topic. They consider a topic emerging if it occurred only rarely in the past but frequently in a specified time interval. They extract the tweet content in the form of term vectors with relative frequencies. For this, they associate a tweet vector tw_j with each tweet tw_j to express all the knowledge carried by the tweet, where each vector component represents a weighted term extracted from the tweet. They retain all keywords, and attempt to highlight keywords that are potentially of high relevance for a topic but appear less frequently. The tweet vector is defined as tw_j = {w_{j,1}, w_{j,2}, ..., w_{j,v}}, where K_t is the corpus vocabulary in time interval I_t, the vocabulary size is v = |K_t|, and the x-th vocabulary term of the j-th post has weight w_{j,x}. Based on the social relationships of active users (content authors), they define a directed graph and compute user authority using PageRank [ BIB002 ]. For each topic (term), they model the topic lifecycle using an aging technique, leveraging the authority of users, thereby studying its usage in a specific interval of time. Each tweet provides nutrition to the words it contains, depending upon the authority of the user who made the tweet. Using keyword k ∈ K_t and the tweet set TW_k^t ⊆ TW_t containing term k in the time interval I_t, the amount of nutrition is defined over the term weights and author authorities. Here w_{k,j} denotes the weight of the term k in tweet vector tw_j, the function user(tw_j) gives the author u of tweet tw_j, and auth(u) is the authority score of user u. Thus, they evaluate term usage frequency to quantify term usage behavior, and analyze author influence to qualify term relevance.
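The nutrition computation can be sketched as a weighted sum; representing each tweet as an (author, term-weight dict) pair, and combining weight and authority by multiplication, are paraphrasing assumptions here:

```python
def nutrition(keyword, tweets, authority):
    """Nutrition of `keyword` in a time interval: every tweet that
    contains the keyword contributes its term weight w_{k,j} scaled
    by the author's authority (e.g. PageRank) score.

    tweets: iterable of (author, {term: weight}) pairs.
    authority: {author: score}.
    """
    total = 0.0
    for author, weights in tweets:
        if keyword in weights:
            total += weights[keyword] * authority.get(author, 0.0)
    return total

# demo: an authoritative author boosts the nutrition of "quake"
tweets = [("alice", {"quake": 0.5}), ("bob", {"quake": 0.2, "news": 0.8})]
authority = {"alice": 2.0, "bob": 1.0}
```

A term mentioned with the same weight thus gains more nutrition when its authors are more authoritative, matching the qualitative description above.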
They formulate an age-dependent energy of a keyword using the nutrition difference across pairs of time intervals. They define a term as hot if the term is used extensively within a given time interval, and as emergent if it is hot in the current time interval but was never hot earlier. Clearly, if a keyword has been hot over more than one time interval, it will not be identified as emergent after the first such interval. They limit the number of previous time slots considered using a threshold. They propose two techniques for selecting the emerging term set within a given time interval: a supervised technique and an unsupervised one. They use the notion of a critical drop [ BIB005 ] to identify emergent topics, and proceed to label topics using a minimal set of keywords. In the supervised setting, the user chooses a permissible threshold for the drop, which defines EK_t, the set of emerging keywords. In the unsupervised model, they set the value of this drop automatically and dynamically, by computing the average drop over successive entries for the keywords ranking higher than the maximum drop point detected, and marking the first higher-than-average drop as the critical drop. They define a topic as a "minimal set of terms, related semantically to an emerging keyword". Emerging terms are mapped to emerging topics by studying the semantic relationships amongst the keywords in K_t extracted within interval I_t, using co-occurrence information. They associate with each keyword k a correlation vector cv_k^t, defining the relationships of k with all the other keywords in the interval I_t, in the form of a weighted term set. They create a topic graph TG_t using the correlation vectors, as a directed and weighted graph whose nodes are labeled with the keywords. Using a weight-based adaptive cut-off, they retain only the edges representing the strongest relationships, and discard the rest.
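The unsupervised critical-drop selection described above can be sketched as follows; tie-breaking and the strictness of the "higher-than-average" comparison are assumptions:

```python
def emerging_keywords(energy):
    """Rank keywords by energy, locate the maximum drop between
    successive entries, average the drops occurring before it, and
    cut at the first drop strictly larger than that average."""
    ranked = sorted(energy.items(), key=lambda kv: kv[1], reverse=True)
    vals = [v for _, v in ranked]
    drops = [vals[i] - vals[i + 1] for i in range(len(vals) - 1)]
    if not drops:
        return [k for k, _ in ranked]
    max_i = max(range(len(drops)), key=drops.__getitem__)
    before = drops[:max_i]
    avg = sum(before) / len(before) if before else drops[max_i]
    for i, d in enumerate(drops):
        if d > avg:
            return [k for k, _ in ranked[:i + 1]]
    # fall back to cutting exactly at the maximum drop
    return [k for k, _ in ranked[:max_i + 1]]
```

In a ranking whose energies plateau and then fall sharply, everything above the sharp fall is returned as the emerging keyword set EK_t.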
They detect emerging topics using the topological structure of TG_t. For this, they discover the strongly connected components rooted at the emerging keyword set EK_t in TG_t. They define the subgraph ET_z^t(K_z, E_z, ρ) as the emerging topic related to each emerging keyword z ∈ EK_t. This subgraph comprises a set of keywords that are semantically related to z in time interval I_t. ρ_{k,z} represents "the relative weight of the keyword k in the corresponding vector cv_k^t", the "role of keyword z in the context of keyword k". Here, the keyword set K_z^t belonging to the emerging topic ET_z^t is obtained by "considering as starting point in TG_t the emerging keyword z, but also contains a set of common terms semantically related to z that are not necessarily included in EK_t". Thus they obtain some keywords indirectly correlated with the emerging keywords. They rank the topics in order to identify which topic is more emergent in the interval. Finally, they perform unsupervised keyword ranking to choose the most representative keywords for each cluster. They experiment with Twitter data of 2 days, and identify the top 5 emergent terms at a given time slot to demonstrate an example of their model output. [ BIB010 ] study the dynamic evolution of Twitter hashtags. Specifically, they investigate the creation, use and dissemination of hashtags by the members of Twitter information networks. They study hashtag propagation in social groups whose members are known to influence each other linguistically. They take a live and rapidly evolving content stream, and analyze the evolution of terms (hashtags). They collect Twitter data of 55 million users, leading to 2 billion followership edges, out of which they find 1.7 billion to be usable. They compare "features of the variation of hashtags to linguistic variation".
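Extracting the topic rooted at an emerging keyword z from the pruned topic graph can be sketched via forward/backward reachability, one standard way to obtain the strongly connected component containing a node; weight-based edge pruning is assumed to have been applied already:

```python
def reachable(graph, start):
    """Nodes reachable from `start` in a directed adjacency dict."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, ()))
    return seen

def emerging_topic(graph, z):
    """Keywords in the strongly connected component of z: the
    intersection of its forward- and backward-reachable sets."""
    reverse = {}
    for u, vs in graph.items():
        for v in vs:
            reverse.setdefault(v, []).append(u)
    return reachable(graph, z) & reachable(reverse, z)

# demo: "mj" and "jackson" reinforce each other; "news" is only referenced
topic_graph = {"mj": ["jackson"], "jackson": ["mj", "news"], "news": []}
```

Keywords on one-way paths out of z are excluded, which matches the intent of keeping only mutually (strongly) related terms in the topic.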
They collect data from interchangeable hashtags that refer to the same event or topic, and would have been considered the same in a more controlled setting than Twitter. For instance, #michaeljackson, #mj and #jackson are hashtags referring to the same topic (subject). They select topics, and form bases by filtering tweets such that a chosen tweet has at least one hashtag, and at least one term that is well known to be related to the topic (such as jackson when referring to Michael Jackson). Motivated by the concept of linguistic innovation ] that models the transformation of any language attribute such as phonetics, phonology, syntax, semantics, etc., the authors define hashtag innovation as a transformation of the hashtag. They observe that individuals seeking to assign a term not yet used for this purpose for categorizing their message tend to create new hashtags; for instance, to tag (name) an action or object that they are unfamiliar with in the physical (offline) world. They observe the presence of the rich-get-richer phenomenon ]: a few hashtags tend to attract most of the attention, with only around 10% of the hashtags getting used more than 10 times, and as many as 60% of the hashtags getting used only once. They observe that the hashtags that gain the maximum popularity tend to be direct, short in length and simple, while many of the less popular hashtags are formed by long character strings. They also observe that the differences in length among the top few popular tags are irrelevant. However, comparing the more popular and less popular hashtags, they conclude that the number of characters in a given hashtag, a linguistic (and internal) feature, determines the success or failure of the hashtag on Twitter. model information flow over topics on social media, using empirical evidence found from natural disaster and political event datasets of Twitter.
They introduce the notion of social discussion threads by creating event clusters on Twitter data, connecting these clusters based upon contemporary external news sources about the events under consideration, and examining the social and temporal relationships across cluster pairs. They identify conversations by exploring the social, semantic and temporal relationships of these clusters. Their model also looks at the temporal evolution of the topics as they evolve in the social network, over discussions. They represent an event as E_i = (K_i, T_i), where K_i is the keyword set extracted from the tweets belonging to event E_i, and T_i is the event time period. K_i contains the proper nouns (extracted using PoS tagging) and the idf vector from the tweets. Thus, each event becomes a cluster of tweet messages. They define extended semantic relationships across event cluster pairs, connecting the pairs with information obtained from a contemporary external document corpus such as Google News. They generate |K_i| × |K_j| keyword pairs that need to be evaluated for extended semantic relationships, pruning semantically related pairs such as synonyms, antonyms, hypernyms and hyponyms in order to avoid skewed results. They use the Wordnet lexical database to compute the similarity of keyword pairs, and retain keyword pairs with sufficient similarity. They find contemporary external documents in which both keywords occur. They compute a document-pair coupling score, such that "if C(K_i^l, D_t) is the tf-idf score of word K_i in document D_t, the pairwise coupling score is given" accordingly. They calculate the coupling score of a pair of keywords as the average coupling score across all documents.
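Since the per-document coupling formula is truncated in the text above, the sketch below uses the product of the two tf-idf scores purely as a stand-in for the per-document coupling, and then averages over the documents where both keywords occur, as described:

```python
def coupling_score(k1, k2, docs, tfidf):
    """Average coupling of a keyword pair over external documents.

    docs: iterable of documents; tfidf(k, d) returns C(k, d).
    The per-document combination (a product here) is an assumption.
    """
    per_doc = []
    for d in docs:
        c1, c2 = tfidf(k1, d), tfidf(k2, d)
        if c1 > 0 and c2 > 0:  # both keywords occur in this document
            per_doc.append(c1 * c2)
    return sum(per_doc) / len(per_doc) if per_doc else 0.0

# demo: documents represented as {term: tf-idf} dicts
news_docs = [{"quake": 0.4, "tsunami": 0.5},
             {"quake": 0.2},
             {"tsunami": 0.1, "quake": 0.1}]
score = coupling_score("quake", "tsunami", news_docs,
                       lambda k, d: d.get(k, 0.0))
```

Documents containing only one of the two keywords contribute nothing, so the score reflects genuine co-occurrence in the external corpus.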
Extending this to all keyword pairs for a given event cluster pair E_i and E_j, if w_{ij} keyword pairs were retained and the rest pruned, they compute the overall connection score of the event pair accordingly. In their setting, a person P belongs to an event cluster E_i iff P posts a message M such that M ∈ E_i. This allows a person to belong to multiple clusters simultaneously. An edge is created between clusters E_i and E_j if person P_i ∈ E_i, P_j ∈ E_j, and (P_i, P_j) is a social followership edge in the input Twitter graph. If E_i and E_j have P_i and P_j memberships respectively, and the average neighbor count in E_j (E_i) of an individual in E_i (E_j) is a_{ij} (a_{ji}), then the edge (E_i, E_j) has a strength of P_i·a_{ij} + P_j·a_{ji}. They create two kinds of temporal relationships across event cluster pairs, drawing from Allen's temporal relationship list [ BIB001 ]. They create a "temporal edge from event E_i to event E_j, if E_j starts within a threshold time gap after E_i ends", and set this gap to 2 days for experiments. This is effectively the set union of Allen's meet and disjoint relationships. They also create Allen's temporal overlap relationship across cluster pairs. They propose a two-step process for identifying social discussion threads that evolve topically. First, they construct the semantic AND temporal graph by taking the edge set intersection of event cluster pairs, considering direction, to form discussion sequences. Next, they construct the semantic AND temporal AND social graph by also intersecting the social edges. This retains the socially connected discussion sequences and discards the others, thereby identifying social discussion threads. They extract modularity-based communities from the discussion sequences as well as the social discussion threads, and find the normalized mutual information (NMI) ] of the two.
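The two-step graph intersection described above reduces to set intersections over directed event-pair edges; representing each edge as a (source, destination) tuple is the only assumption:

```python
def discussion_sequences(semantic_edges, temporal_edges):
    """Step 1: directed edges present in both the semantic and the
    temporal event-cluster graphs form discussion sequences."""
    return set(semantic_edges) & set(temporal_edges)

def social_discussion_threads(semantic_edges, temporal_edges, social_edges):
    """Step 2: keep only the socially connected sequences."""
    return discussion_sequences(semantic_edges, temporal_edges) & set(social_edges)

# demo over directed event-pair edges
sem = {("E1", "E2"), ("E2", "E3"), ("E3", "E4")}
tem = {("E1", "E2"), ("E2", "E3")}
soc = {("E2", "E3"), ("E3", "E4")}
```

An event pair survives into a social discussion thread only if it is simultaneously semantically related, temporally ordered, and socially connected.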
Over multiple datasets, they show that this NMI value is significantly higher compared to the NMI value across the communities found in the input social and semantic graphs. They claim this as evidence of topical discussions growing and evolving along social connections over time, rather than at random, even for large-scale events where randomness of user participation and discussion is likely. They also qualitatively show that discussion threads tend to localize within social communities. [ BIB011 ] propose an approach to forecast the life cycle of trending topics as they emerge. They observe popular terms from 10 different sources, including 5 Google channels, 3 Twitter channels and 2 Wikipedia channels. Retrieving 10-20 feeds per day (in total 110 topics per day), they observe thousands of topics over a period of a year. They unify the trends found across different sources using edit-distance clustering. They rank each trending topic (cluster) by assigning a global trend score as the sum of daily trend scores. They define the lifetime of a trend as "the number of consecutive days with positive trend scores". Analyzing trend lifetimes, they investigate the survival duration of trends and its variation across different media channels. They find trends to typically last less than 14 days. They observe Twitter trends to be the shortest, and Wikipedia trends also to be short. They observe Google to cover a significant proportion of the major trends; thus Google dominates the lifetime histogram of the trending topics. They observe that certain categories of topics go well with certain channels. For instance, sports is the most popular on Google, while holidays, celebrities and entertainment are most popular on Twitter. Using historical time series data from multiple semantically similar topics, they forecast which of the emerging topics will trend. This comprises three steps. First, they discover semantically similar topics.
They use DBPedia BIB004 ] named entities and category information. They create a topic set that includes all discovered similar topics. To find similar topics, they define two topic sets: one including the trending topic, and another containing various general topics (to compare with the trending topic). Second, they perform nearest-neighbor sequence matching on the time series of the topics of interest, comparing "the viewing statistics of the two previous months, to all partial sequences of same length of similar topics in the set of topics". Third, they forecast the life cycle of the trending topics. Their forecast draws from the best-matching semantically similar topic, and uses the semantic similarity score to "scale to adjust to the nearest neighbor time series". [ BIB003 ] propose incorporating simple semantics into topic detection for documents, by grouping terms with similar meanings. They associate the groups with an external ontology, and extract terms and entities into distinct sub-vectors to represent the document. The similarity of a given pair of documents is computed using sub-vector similarity. predict topics that would draw attention in the future. They use moving average convergence divergence (MACD), an indicator frequently used to study stock prices, to identify emerging topics, using a short-period and long-period trend momentum oscillator and an average of term frequency. They predict that a term will trend positively if a trend with negative momentum turns positive, and will trend negatively if a trend with positive momentum turns negative.
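The MACD-style predictor can be sketched over a term-frequency series as below; the EMA spans chosen here are illustrative, not the surveyed paper's settings:

```python
def ema(series, span):
    """Exponential moving average with smoothing factor 2/(span+1)."""
    k = 2 / (span + 1)
    out = [series[0]]
    for x in series[1:]:
        out.append(k * x + (1 - k) * out[-1])
    return out

def macd_signal(freqs, short=3, long=6):
    """Short-EMA minus long-EMA oscillator over a term-frequency
    series; a negative-to-positive crossing is read as the term
    starting to trend positively."""
    macd = [s - l for s, l in zip(ema(freqs, short), ema(freqs, long))]
    crossed_up = any(macd[i] <= 0 < macd[i + 1] for i in range(len(macd) - 1))
    return macd, crossed_up

# demo: a term whose frequency takes off vs. one that decays
_, rising = macd_signal([1, 1, 1, 1, 2, 4, 8])
_, falling = macd_signal([8, 4, 2, 1, 1, 1])
```

The short EMA reacts to the recent burst faster than the long EMA, so the oscillator crossing zero from below signals positive momentum, mirroring the stock-price usage of MACD.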
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> The importance of a Web page is an inherently subjective matter, which depends on the reader's interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And, we show how to apply PageRank to search and to user navigation. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of contexts on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of "authoritative" information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of "hub pages" that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristics for link-based analysis.
<s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> Similarity breeds connection. This principle—the homophily principle—structures network ties of every type, including marriage, friendship, work, advice, support, information transfer, exchange, comembership, and other types of relationship. The result is that people's personal networks are homogeneous with regard to many sociodemographic, behavioral, and intrapersonal characteristics. Homophily limits people's social worlds in a way that has powerful implications for the information they receive, the attitudes they form, and the interactions they experience. Homophily in race and ethnicity creates the strongest divides in our personal environments, with age, religion, education, occupation, and gender following in roughly that order. Geographic propinquity, families, organizations, and isomorphic positions in social systems all create contexts in which homophilous relations form. Ties between nonsimilar individuals also dissolve at a higher rate, which sets the stage for the formation of niches (localize... <s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> Today, when searching for information on the WWW, one usually performs a query through a term-based search engine. These engines return, as the query's result, a list of Web pages whose content matches the query. For broad-topic queries, such searches often result in a huge set of retrieved documents, many of which are irrelevant to the user. However, much information is contained in the link-structure of the WWW. Information such as which pages are linked to others can be used to augment search algorithms. In this context, Jon Kleinberg introduced the notion of two distinct types of Web pages: hubs and authorities.
Kleinberg argued that hubs and authorities exhibit a mutually reinforcing relationship: a good hub will point to many authorities, and a good authority will be pointed at by many hubs. In light of this, he devised an algorithm aimed at finding authoritative pages. We present SALSA, a new stochastic approach for link-structure analysis, which examines random walks on graphs derived from the link-structure. We show that both SALSA and Kleinberg's Mutual Reinforcement approach employ the same meta-algorithm. We then prove that SALSA is equivalent to a weighted in-degree analysis of the link-structure of WWW subgraphs, making it computationally more efficient than the Mutual Reinforcement approach. We compare the results of applying SALSA to the results derived through Kleinberg's approach. These comparisons reveal a topological phenomenon called the TKC effect which, in certain cases, prevents the Mutual Reinforcement approach from identifying meaningful authorities. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is, in turn, modeled as an infinite mixture over an underlying set of topic probabilities. In the context of text modeling, the topic probabilities provide an explicit representation of a document. We present efficient approximate inference techniques based on variational methods and an EM algorithm for empirical Bayes parameter estimation. We report results in document modeling, text classification, and collaborative filtering, comparing to a mixture of unigrams model and the probabilistic LSI model.
<s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> Node characteristics and behaviors are often correlated with the structure of social networks over time. While evidence of this type of assortative mixing and temporal clustering of behaviors among linked nodes is used to support claims of peer influence and social contagion in networks, homophily may also explain such evidence. Here we develop a dynamic matched sample estimation framework to distinguish influence and homophily effects in dynamic networks, and we apply this framework to a global instant messaging network of 27.4 million users, using data on the day-by-day adoption of a mobile service application and users' longitudinal behavioral, demographic, and geographic data. We find that previous methods overestimate peer influence in product adoption decisions in this network by 300–700%, and that homophily explains >50% of the perceived behavioral contagion. These findings and methods are essential to both our understanding of the mechanisms that drive contagions in networks and our knowledge of how to propagate or combat them in domains as diverse as epidemiology, marketing, development economics, and public health. <s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> The growing demand for large-scale data mining and data analysis applications has led both industry and academia to design new types of highly scalable data-intensive computing platforms. MapReduce and Dryad are two popular platforms in which the dataflow takes the form of a directed acyclic graph of operators. These platforms lack built-in support for iterative programs, which arise naturally in many applications including data mining, web ranking, graph analysis, model fitting, and so on. 
This paper presents HaLoop, a modified version of the Hadoop MapReduce framework that is designed to serve these applications. HaLoop not only extends MapReduce with programming support for iterative applications, it also dramatically improves their efficiency by making the task scheduler loop-aware and by adding various caching mechanisms. We evaluated HaLoop on real queries and real datasets. Compared with Hadoop, on average, HaLoop reduces query runtimes by a factor of 1.85, and shuffles only 4% of the data between mappers and reducers. <s> BIB007 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> In recent years, research on measuring trajectory similarity has attracted a lot of attention. Most similarities are defined based on the geographic features of mobile users' trajectories. However, trajectories geographically close may not necessarily be similar because the activities implied by nearby landmarks they pass through may be different. In this paper, we argue that a better similarity measurement should take into account the semantics of trajectories. We propose a novel approach for recommending potential friends based on users' semantic trajectories for location-based social networks. The core of our proposal is a novel trajectory similarity measurement, namely, Maximal Semantic Trajectory Pattern Similarity (MSTP-Similarity), which measures the semantic similarity between trajectories. Accordingly, we propose a user similarity measurement based on MSTP-Similarity of user trajectories and use it as the basis for recommending potential friends to a user. Through experimental evaluation, the proposed friend recommendation approach is shown to deliver excellent performance.
<s> BIB008 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> Twitter enjoys enormous popularity as a micro-blogging service largely due to its simplicity. On the downside, there is little organization to the Twitterverse and making sense of the stream of messages passing through the system has become a significant challenge for everyone involved. As a solution, Twitter users have adopted the convention of adding a hash at the beginning of a word to turn it into a hashtag. Hashtags have become the means in Twitter to create threads of conversation and to build communities around particular interests. In this paper, we take a first look at whether hashtags behave as strong identifiers, and thus whether they could serve as identifiers for the Semantic Web. We introduce some metrics that can help identify hashtags that show the desirable characteristics of strong identifiers. We look at the various ways in which hashtags are used, and show through evaluation that our metrics can be applied to detect hashtags that represent real world entities. <s> BIB009 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> We show that information about social relationships can be used to improve user-level sentiment analysis. The main motivation behind our approach is that users that are somehow "connected" may be more likely to hold similar opinions; therefore, relationship information can complement what we can extract about a user's viewpoints from their utterances. Employing Twitter as a source for our experimental data, and working within a semi-supervised framework, we propose models that are induced either from the Twitter follower/followee network or from the network in Twitter formed by users referring to each other using "@" mentions.
Our transductive learning results reveal that incorporating social-network information can indeed lead to statistically significant sentiment-classification improvements over the performance of an approach based on Support Vector Machines having access only to textual features. <s> BIB010 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> Social networks and the propagation of content within social networks have received extensive attention during the past few years. Social network content propagation is believed to depend on the similarity of users as well as on the existence of friends in the social network. Our former investigation of the YouTube social network showed that strangers (non-friends and non-followers) play a more important role in content propagation than friends. In this paper, we analyze user communities within the YouTube social network and apply various similarity measures on them. We investigate the degree of similarity in communities versus the entire social network. We found that communities are formed from similar users. At the same time, we found that there are no large similarity values between friends in YouTube communities. <s> BIB011 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Topic Dynamics and Familiarity/Similarity Groups <s> Community detection in social networks is a well-studied problem. A community in a social network is commonly defined as a group of people whose interactions within the group are more frequent than those outside the group. It is believed that people's behavior can be linked to the behavior of their social neighborhood.
While shared characteristics of communities have been used to validate the communities found, to the best of authors' knowledge, it is not demonstrated in the literature that communities found using social interaction data are like-minded, i.e., they behave similarly in terms of their interest in items (e.g., movie, products). In this paper, we experimentally demonstrate, on a social networking movie rating dataset, that people who are interested in an item are socially better connected than the overall graph. Motivated by this fact, we propose a method for finding communities wherein like-mindedness is an explicit objective. We find small tight groups with many shared interests using a frequent item set mining approach and use these as building blocks for the core of these like-minded communities. We show that these communities have higher similarity in their interests compared to communities found using only the interaction information. We also compare our method against a baseline where the weight of edges are defined based on similarity in interests between nodes and show that our approach achieves far higher level of like-mindedness amongst the communities compared to this baseline as well. <s> BIB012
Bringing the aspects of familiarity and similarity together, finding the impact of one on the other, and correlating the two for information modeling have drawn research interest. Research questions that require studying familiarity and similarity of users of online social networks have been asked, such as whether topics of interest are more similar among users with following relations than without, and whether recommending a user to make a social connection with another user based upon similarity is effective. In Twitter, homophily BIB003 implies that a "user follows a friend if she is interested in one or more topics posted by the friend, and the friend follows her back because she finds that they share similar topical interest(s)". Researchers have investigated homophily for information diffusion and community analysis. [Summary of representative works: (i) investigates the presence and causes of reciprocity in the Twitter followership network, and the impact of this reciprocity; shows that Twitter users with reciprocal followerships are topic-wise more similar compared to those without, and that Twitter followerships are more interest-based than casual. (ii) proposes a SALSA-based stochastic user-recommendation algorithm for a user to follow other users, based upon user-expressed interest and the set of people followed, letting the system recommend other users to a given user; observes that users who are similar often follow one another, and users often follow other users that in turn follow similar other users.] In one of the earliest works, attempt to bring social familiarity and similarity together in social network and microblog settings. They collect data for 996 top Twitter users from Singapore in terms of number of followers, as per twitterholic.com. They crawl the followers and friends (those being followed) of each of these users s ∈ S, and store them in the set S̄. They finalize their set of target users for the experiment as S′ = S ∪ S̄.
Thereby, they obtain S* = {s | s ∈ S′, and s is from Singapore}. In their data, |S*| = 6748. They represent the set of all tweets by all members of S* by T, where |T| = 1,021,039 for their dataset. They observe that, except for a few outliers, the number of tweets made by the users, the number of followers, and the number of friends (those being followed) follow the power-law distribution. They observe that the Twitter platform is rich in the reciprocity property: in spite of an edge (followership) being a one-way relationship, "72.4% of Twitter users follow back more than 80% of their followers, and 80.5% of the users have 80% of users they follow, following them back". To determine the presence of homophily on Twitter, they ask whether topics of interest are more similar among users with following (and reciprocal following) relationships compared to those without. To answer, they attempt to find topic interests of Twitter users, since topics are not explicitly specified on Twitter, and hashtags are not present in all messages. They collect all tweets made by a user, and create a user-level document, and repeat this for each user. They run LDA BIB005 for topic detection. In the LDA process, they create DT, a D × T matrix, where D and T respectively denote the count of users and topics. DT_ij represents the "number of times a word in user s_i's tweets is assigned to topic t_j". They measure the topical difference between a pair of users s_i and s_j as the JS divergence D_JS(i, j) between the probability distributions DT′_i and DT′_j (the row-normalized topic distributions of the two users), calculated as D_JS(i, j) = (1/2) D_KL(DT′_i ∥ M) + (1/2) D_KL(DT′_j ∥ M). Here "M is the average of the two probability distributions, and D_KL is the KL divergence of the two". Using the notion of topical difference, they perform statistical hypothesis testing and find, in answer to their question, that users with following (and reciprocal following) relationships are more similar in terms of topics of interest than those without.
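The JS-divergence-based topical difference described above can be sketched as follows; this is a minimal illustration in which the two topic distributions are hypothetical stand-ins for normalized rows DT′_i and DT′_j of the user-topic matrix.

```python
import math

def kl_divergence(p, q):
    # D_KL(p || q); terms with p_i = 0 contribute nothing.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    # Jensen-Shannon divergence: average KL divergence of p and q
    # from their mid-point distribution M.
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Hypothetical normalized topic distributions for two users (rows of DT').
user_i = [0.7, 0.2, 0.1]
user_j = [0.1, 0.2, 0.7]
difference = js_divergence(user_i, user_j)  # larger => less topically similar
```

Unlike raw KL divergence, the JS divergence is symmetric and bounded (by ln 2 when natural logarithms are used), which makes it convenient as a pairwise user-distance measure.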
They attempt to measure the topic-sensitive influence of Twitter users by proposing a PageRank-like BIB001 framework, and call it topic-specific TwitterRank. They consider the directed graph where edges are directed from followers to friends (persons followed). They perform a topic-specific random walk, and construct a topic-specific relationship network among Twitter users. For a topic t, the random-surfing transition probability from follower s_i to friend s_j is defined as P_t(i, j) = (|T_j| / Σ_{a: s_i follows s_a} |T_a|) × sim_t(i, j). Here s_j has published |T_j| tweets, and Σ_{a: s_i follows s_a} |T_a| is the total number of tweets published by all the friends of s_i. The similarity between s_i and s_j in topic t can be found as sim_t(i, j) = 1 − |DT′_it − DT′_jt|. This definition captures two notions. (a) It assigns a higher transition probability to friends who publish content more frequently. (b) The influence is also based upon the topical similarity of s_i and s_j, capturing the homophily phenomenon. They introduce measures to account for pairs of users that follow only each other and nobody else. For this, they use a teleportation vector E_t, which captures the probability of a random walk jumping to some users rather than following the graph edges all the time. They calculate the topic-specific TwitterRank vector TR_t of users in topic t iteratively as TR_t = γ P_t × TR_t + (1 − γ) E_t, where P_t is the transition probability matrix and γ (0 ≤ γ ≤ 1) controls the teleportation probability. The TwitterRank vectors thus constructed are topic-specific. They capture the influence of users for each topic, and aggregate to compute the overall influence of users as TR = Σ_t r_t · TR_t. Here topic t is given weight r_t, applied to the corresponding TR_t. Weight assignments differ across different settings, to compute user influence under such settings. Their study reveals that the high reciprocity in Twitter can be explained by homophily. This empirically shows that Twitter followerships are more interest-based than casual.
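The iterative computation of a topic-specific TwitterRank vector can be sketched roughly as follows; the 3-user transition matrix and uniform teleportation vector are toy assumptions, not data from the study, and entry P[i][j] stands for the follower-to-friend transition probability for the topic.

```python
def twitterrank(P, E, gamma=0.85, iters=200):
    # Power iteration for the topic-specific rank vector:
    # each step mixes gamma of the mass moved along transitions with
    # (1 - gamma) of the teleportation vector E. Since P[i][j] is the
    # probability of moving from follower i to friend j, friend j
    # accumulates gamma * P[i][j] * tr[i] from each follower i.
    n = len(P)
    tr = [1.0 / n] * n
    for _ in range(iters):
        tr = [(1.0 - gamma) * E[j]
              + gamma * sum(P[i][j] * tr[i] for i in range(n))
              for j in range(n)]
    return tr

# Hypothetical 3-user topic-specific transition matrix (rows sum to 1)
# and a uniform teleportation vector E_t.
P = [[0.0, 0.6, 0.4],
     [0.5, 0.0, 0.5],
     [0.3, 0.7, 0.0]]
E = [1.0 / 3] * 3
ranks = twitterrank(P, E)
```

With a row-stochastic P and a teleportation vector summing to 1, each iteration preserves the total probability mass, so the result remains a distribution over users.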
observe that on Twitter, a user tends to follow those who are followed by other similar users. Thus, the followers of a user tend to be similar to each other. They claim that user similarity is likely to lead to followership (familiarity). Motivated by this, they deploy a few user recommendation algorithms (a user recommended to another user for followership) in Twitter's live production system. One algorithm is based upon a user's circle of trust, derived from an egocentric random walk similar to personalized PageRank BIB007 . The random-walk parameters include the count of steps, the reset probability (optionally discarding low-probability vertices), control parameters used to sample outgoing edges for high-outdegree vertices, etc. They dynamically adjust the random-walk and personalization parameters for specific applications. They deploy another algorithm based upon SALSA (Stochastic Approach for Link-Structure Analysis) BIB004 , a random-walk algorithm like PageRank BIB001 and HITS BIB002 . SALSA is applied on a hub-authority bipartite graph such that it traverses a pair of links at each step, one forward and one backward link. This ensures that the random walk ends up on the same side of the bipartite graph every time. For each user, the hub side comprises a set of users that the given user trusts, and the authority side comprises a set of users that the hubs follow. They run SALSA for multiple iterations and assign scores to both sides of the bipartite graph. On one side of the bipartite graph, they obtain an "interested in" kind of ranking of the vertices. On the other side, they obtain user similarity measures. This lets their system recommend other users to a given user, using a rank of similarity of users that are thus reached in the random-walk process, where the ranks are computed based upon expressed interest and the set of people followed.
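A minimal sketch of the authority-side SALSA walk on a hub-authority bipartite graph follows, using a hypothetical edge list; within a connected component the scores converge toward the normalized in-degrees, reflecting the weighted in-degree equivalence noted in the SALSA abstract above.

```python
from collections import defaultdict

def salsa_authority_scores(edges, iters=50):
    # Authority-side Markov chain of SALSA: from authority a, step backward
    # to a hub chosen uniformly among hubs pointing at a, then forward to an
    # authority chosen uniformly among that hub's out-links.
    out = defaultdict(list)   # hub -> authorities it links to
    inn = defaultdict(list)   # authority -> hubs linking to it
    for hub, auth in edges:
        out[hub].append(auth)
        inn[auth].append(hub)
    auths = sorted(inn)
    score = {a: 1.0 / len(auths) for a in auths}
    for _ in range(iters):
        nxt = dict.fromkeys(auths, 0.0)
        for a in auths:
            backward = score[a] / len(inn[a])       # pick a hub uniformly
            for hub in inn[a]:
                forward = backward / len(out[hub])  # pick an authority uniformly
                for a2 in out[hub]:
                    nxt[a2] += forward
        score = nxt
    return score

# Hypothetical bipartite followership graph: hubs h1, h2 follow
# authorities a1, a2; a1 has in-degree 2 and a2 has in-degree 1.
scores = salsa_authority_scores([("h1", "a1"), ("h2", "a1"), ("h2", "a2")])
```

On this toy connected component the stationary scores are 2/3 for a1 and 1/3 for a2, i.e., proportional to in-degree, matching the weighted in-degree interpretation.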
They evaluate on Twitter using offline experiments on retrospective data, as well as A/B split testing on live data, and find SALSA the most effective among the different follower-recommendation algorithms for Twitter. Among other studies that involve social familiarity and similarity, [ BIB008 ] model social-network user similarity using trajectory mining. BIB011 analyze YouTube social-network user communities and apply several measures of similarity on the communities. Some of the similarity computation methods they apply include the Jaccard and Dice similarity coefficients, the Sokal and Sneath similarity measure, the Russel and Rao similarity measure, the Roger and Tanimoto similarity measure, and the L1 and L2 norms. They observe that communities are formed from similar users on YouTube; however, they do not find the friends in YouTube communities to be largely similar. BIB012 attempt to find like-minded communities on a movie-review platform that also has an inbuilt social-network friendship platform. They define like-mindedness as a measure to capture the compatible interest levels among community members, as the cosine similarity of the ratings the members assign to different movies. They find communities with the objective being like-mindedness. Using frequent itemset mining, they find tight small groups with multiple shared interests, which act as core building blocks of like-minded communities. Comparing with communities discovered using only interaction information, they show these communities to have higher similarity of interests. [ BIB006 ] attempt to distinguish between influence-based diffusion and homophily-driven contagion in product-adoption decisions, on dynamic networks. They investigate the diffusion of a mobile service product for 5 months after launch, on the Yahoo instant messenger (IM) network, a social network that comprised 27.4 million users at the time of experimentation.
They use a dynamic matched sample estimation framework that they develop to differentiate influence and homophily effects in a dynamic network setting. Their findings indicate that "homophily explains more than 50% of perceived behavioral contagion". While this study is not a direct investigation of the impact of familiarity on similarity or vice-versa, it is one of the early works on social networks that show the significance of similarity (homophily) on a social network, and contrast this with the impact of peer influence. ] consider similarity and social familiarity together, investigating the impact of homophily on information diffusion, as outlined in Section 3. Many research works exist that address similarity and familiarity independently. Different kinds of similarities between users have been studied on social networks and microblogs, like Facebook and Twitter. Early studies attempted to measure tag-based similarity of users. For instance, BIB009 measure user similarity based upon Twitter hashtags. Topic-based similarity of users refines the notion of tag-based similarity of microblog users. propose to train topic models using two different methodologies: LDA BIB005 and the author-topic model. They subsequently infer the topic mixture θ both for the corpus and for messages. They "classify users and associated messages into topical categories", to empirically demonstrate their system on Twitter. They use JS divergence to measure similarity between topics. Based upon this, they classify users into topical categories, which in turn can act as a foundation for measuring similarities of user pairs. In a study focusing on Twitter user sentiments (opinions), BIB010 empirically show, under the hypothesis that connected (familiar) persons will have similar opinions, that relationship information can complement what one can extract about a person's viewpoints from their explicit utterances. This in turn can be used to improve user-level sentiment analysis.
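A few of the similarity measures surveyed in this subsection (the Jaccard and Dice coefficients over sets of items, and cosine like-mindedness over movie ratings) can be sketched as follows; the rating dictionaries and item sets are hypothetical toy data.

```python
import math

def jaccard(a, b):
    # |A intersect B| / |A union B| over two item sets.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def dice(a, b):
    # 2|A intersect B| / (|A| + |B|).
    a, b = set(a), set(b)
    return 2.0 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0

def like_mindedness(r1, r2):
    # Cosine similarity of two users' movie-rating dictionaries,
    # computed over the movies both users rated.
    common = set(r1) & set(r2)
    if not common:
        return 0.0
    dot = sum(r1[m] * r2[m] for m in common)
    n1 = math.sqrt(sum(r1[m] ** 2 for m in common))
    n2 = math.sqrt(sum(r2[m] ** 2 for m in common))
    return dot / (n1 * n2)
```

All three measures return values in [0, 1], with 1 indicating identical item sets (Jaccard, Dice) or proportional rating vectors (cosine).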
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Geo-Spatial Topical Communities and Their Evolution <s> Large volumes of spatio-temporal-thematic data being created using sites like Twitter and Jaiku can potentially be combined to detect events, and understand various 'situations' as they are evolving at different spatio-temporal granularity across the world. Taking inspiration from traditional image pixels which represent aggregation of photon energies at a location, we consider aggregation of user interest levels at different geo-locations as social pixels. Combining such pixels spatio-temporally allows for creation of social images and video. Here, we describe how the use of relevant (media processing inspired) situation detection operators upon such 'images', and domain based rules can be used to decide relevant control actions. The ideas are showcased using a Swine flu monitoring application which uses Twitter data. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Geo-Spatial Topical Communities and Their Evolution <s> Recently, microblogging sites such as Twitter have garnered a great deal of attention as an advanced form of location-aware social network services, whereby individuals can easily and instantly share their most recent updates from any place. In this study, we aim to develop a geo-social event detection system by monitoring crowd behaviors indirectly via Twitter. In particular, we attempt to find out the occurrence of local events such as local festivals; a considerable number of Twitter users probably write many posts about these events. To detect such unusual geo-social events, we depend on geographical regularities deduced from the usual behavior patterns of crowds with geo-tagged microblogs.
By comparing these regularities with the estimated ones, we decide whether there are any unusual events happening in the monitored geographical area. Finally, we describe the experimental results to evaluate the proposed unusuality detection method on the basis of geographical regularities obtained from a large number of geo-tagged tweets around Japan via Twitter. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Geo-Spatial Topical Communities and Their Evolution <s> We present the first comprehensive characterization of the diffusion of ideas on Twitter, studying more than 4000 topics that include both popular and less popular topics. On a data set containing approximately 10 million users and a comprehensive scraping of all the tweets posted by these users between June 2009 and August 2009 (approximately 200 million tweets), we perform a rigorous temporal and spatial analysis, investigating the time-evolving properties of the subgraphs formed by the users discussing each topic. We focus on two different notions of the spatial: the network topology formed by follower-following links on Twitter, and the geospatial location of the users. We investigate the effect of initiators on the popularity of topics and find that users with a high number of followers have a strong impact on popularity. We deduce that topics become popular when disjoint clusters of users discussing them begin to merge and form one giant component that grows to cover a significant fraction of the network. Our geospatial analysis shows that highly popular topics are those that cross regional boundaries aggressively. 
<s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Geo-Spatial Topical Communities and Their Evolution <s> Reducing the impact of seasonal influenza epidemics and other pandemics such as the H1N1 is of paramount importance for public health authorities. Studies have shown that effective interventions can be taken to contain the epidemics if early detection can be made. Traditional approach employed by the Centers for Disease Control and Prevention (CDC) includes collecting influenza-like illness (ILI) activity data from “sentinel” medical practices. Typically there is a 1–2 week delay between the time a patient is diagnosed and the moment that data point becomes available in aggregate ILI reports. In this paper we present the Social Network Enabled Flu Trends (SNEFT) framework, which monitors messages posted on Twitter with a mention of flu indicators to track and predict the emergence and spread of an influenza epidemic in a population. Based on the data collected during 2009 and 2010, we find that the volume of flu related tweets is highly correlated with the number of ILI cases reported by CDC. We further devise auto-regression models to predict the ILI activity level in a population. The models predict data collected and published by CDC, as the percentage of visits to “sentinel” physicians attributable to ILI in successively weeks. We test models with previous CDC data, with and without measures of Twitter data, showing that Twitter data can substantially improve the models prediction accuracy. Therefore, Twitter data provides real-time assessment of ILI activity. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Geo-Spatial Topical Communities and Their Evolution <s> We present a large-scale study of user behavior in Foursquare, conducted on a dataset of about 700 thousand users that spans a period of more than 100 days. 
We analyze user checkin dynamics, demonstrating how it reveals meaningful spatio-temporal patterns and offers the opportunity to study both user mobility and urban spaces. Our aim is to inform on how scientific researchers could utilise data generated in Location-based Social Networks to attain a deeper understanding of human mobility and how developers may take advantage of such systems to enhance applications such as recommender systems. <s> BIB005 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Geo-Spatial Topical Communities and Their Evolution <s> Studying relationships between keyword tags on social sharing websites has become a popular topic of research, both to improve tag suggestion systems and to discover connections between the concepts that the tags represent. Existing approaches have largely relied on tag co-occurrences. In this paper, we show how to find connections between tags by comparing their distributions over time and space, discovering tags with similar geographic and temporal patterns of use. Geo-spatial, temporal and geo-temporal distributions of tags are extracted and represented as vectors which can then be compared and clustered. Using a dataset of tens of millions of geo-tagged Flickr photos, we show that we can cluster Flickr photo tags based on their geographic and temporal patterns, and we evaluate the results both qualitatively and quantitatively using a panel of human judges. We also develop visualizations of temporal and geographic tag distributions, and show that they help humans recognize semantic relationships between tags. This approach to finding and visualizing similar tags is potentially useful for exploring any data having geographic and temporal annotations. 
<s> BIB006 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Geo-Spatial Topical Communities and Their Evolution <s> We present the first comprehensive characterization of the diffusion of ideas on Twitter, studying more than 5.96 million topics that include both popular and less popular topics. On a data set containing approximately 10 million users and a comprehensive scraping of 196 million tweets, we perform a rigorous temporal and spatial analysis, investigating the time-evolving properties of the subgraphs formed by the users discussing each topic. We focus on two different notions of the spatial: the network topology formed by follower-following links on Twitter, and the geospatial location of the users. We investigate the effect of initiators on the popularity of topics and find that users with a high number of followers have a strong impact on topic popularity. We deduce that topics become popular when disjoint clusters of users discussing them begin to merge and form one giant component that grows to cover a significant fraction of the network. Our geospatial analysis shows that highly popular topics are those that cross regional boundaries aggressively. <s> BIB007
Different topics on social networks receive different levels of visibility and traction at different geo-locations. Further, the span of these topics, from inception of a topic to the topic passing through its lifecycle, varies across geographies, depending upon the nature and the locality of the events. Spatio-temporal analysis of microblog topics and modeling topical information diffusion in spatio-temporal settings are active research areas. Several works have attempted to analyze spatio-temporal aspects of social media and microblogs, mostly Twitter, with different angles of application. [ BIB003 BIB007 ] conduct some of the pioneering studies to characterize the spatio-temporal characteristics of the diffusion of ideas on Twitter. On the subgraphs that form out of users discussing each given topic, they study two time-evolving properties: the network topology of followership and the geo-spatial location of users. They use Twitter data collected between June and August 2009, spanning over 10 million users and 196 million tweets. They infer geo-locations from GPS data and user-specified data on Twitter in the form of latitude-longitude pairs, using the Yahoo! Placefinder service API to resolve them in terms of city, state and country. They take the hashtags as topics. Since only 10% of the tweets have a hashtag in their dataset, they also augment the set of topics by tagging tweets with entities, topics, places and other such tags, extracted using a text analytics engine (OpenCalais), and allowing a tweet to have multiple tags. They use the term event for major or minor happenings causing a surge in tweeting activity on a given topic.
In their model, they partition events into five divisions: a pre-event phase when a topic gets initiated in the social network, a growth phase when the topic is discussed by early adopters, a peak phase when the topic is discussed by an early majority of individuals, a decaying phase when the topic is discussed by a late majority of individuals, and a post-event phase when the topic is discussed by laggards. They experiment with three event categories they created to perform the characterization: "popular events having 10,000+ tweets, medium-popular events having between 500 and 10,000 tweets and non-popular events having between 100 and 500 tweets". For each topic, they construct a subgraph (lifetime graph) of individuals who, at any time in the window, have tweeted at least once on the topic. [Summary of representative works: (i) Characterizes the spatio-temporal diffusion of ideas on Twitter; investigates the network topology of followership and the geo-spatial location of users on per-topic user graphs; shows that topics become popular if the follower count of the topic initiator is high and the topic is received by users having just a few followers; shows that popular topics cross geographical boundaries, and that disjoint clusters of popular topics merge to form a giant component. (ii) Identifies and characterizes topical discussions at various geographical granularities; assigns users and tweets to locations, and creates temporal and geographical relationships across event message clusters, thereby identifying discussions; observes geographical localization of the temporal evolution of topical discussions on Twitter, finding discussions to "evolve more at city levels compared to country levels, and more at country levels compared to globally". (iii) Analyzes spatio-temporal dynamics of user activity on Twitter; creates a two-pass process with a content and temporal analysis module to handle micro-blog message streams and categorize them into topics, and a spatial analysis module to assign locations to the messages on the world map; observes that the distribution of users who discussed a given event becomes global once a news medium broadcasts the news, expanding the geographical span of the locations associated with the event; recognizes an event as local if it has a high-density distribution, and global otherwise.] They investigate a cumulative evolving graph for a topic, which captures the cumulative action of a user tweeting on the topic on at least one given day. They also study an evolving graph for a topic, which captures the action of a user tweeting on the topic on a given day. Analyzing the above graphs, they observe that popular topics aggressively cross regional boundaries, but unpopular topics do not. They hypothesize that popularity and geographical spread of topics are correlated. They count the number of regions with at least one individual mentioning a topic and plot it against the topic's popularity. The plot indicates that popular topics typically touch a higher number of regions compared to the less popular ones. In order to prove their hypothesis, in the cumulative evolving graphs, they compute the proportion of edges (u → v) for each topic such that u and v belong to two different geographical regions. They observe that the fraction of edges that cross geographical boundaries throughout their evolution is high for the popular events, ranging from 0.74 to 0.81 in their experiments. This fraction is observed to be low in the case of medium-popular events, and very low in the case of non-popular events. In summary, this part of their analysis shows that the more popular a topic is on Twitter, the higher the fraction of edges crossing geographical boundaries, across all temporal phases of the event in its lifecycle.
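The proportion of boundary-crossing edges used in this analysis can be sketched as follows; the edge list (follower u to friend v within a per-topic graph) and the user-to-region map are hypothetical toy data.

```python
def cross_region_fraction(edges, region_of):
    # Proportion of edges (u, v) in a topic's graph whose endpoints
    # lie in different geographical regions.
    if not edges:
        return 0.0
    crossing = sum(1 for u, v in edges if region_of[u] != region_of[v])
    return crossing / len(edges)

# Hypothetical per-topic cumulative evolving graph and user regions.
edges = [("u1", "u2"), ("u1", "u3"), ("u2", "u3")]
region_of = {"u1": "SG", "u2": "SG", "u3": "US"}
frac = cross_region_fraction(edges, region_of)  # 2 of 3 edges cross regions
```

Computed per temporal phase of an event, this fraction reproduces the kind of measurement reported above (0.74 to 0.81 for popular events).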
Analyzing 4,000 popular and less-popular topics, they show that most users discussing a popular topic on a given day tend to form a large, connected subgraph, whereas discussions on less popular topics tend to be restricted to disconnected clusters. They infer that "topics become popular when disjoint clusters of users discussing them begin to merge and form one giant component that grows to cover a significant fraction of the network". They also find the popularity of a given topic to be high when the topic initiator has a large number of followers. ] conduct a geo-spatial analysis of topical discussions on unstructured microblogs, demonstrating it empirically on Twitter. They identify and characterize topical discussion threads on Twitter at different geographical granularities, specifically countries and cities. They cluster the tweets by topic, and draw the notions of extended (contextual) semantic and temporal relationships from . They create geographical relationships across pairs of clusters based upon the geo-locations that the constituent tweets and users belong to. To compute geographical relationships, they assign users and tweets to locations with certain probabilities, based upon the users' profiles and the tweets' origins. They propose two definitions of the belongingness of a cluster to a geographical region: one based upon the geographical distribution of the users whose messages are included in the cluster, and the other based upon the geographical distribution of origination of the tweets that constitute the cluster. They extract geographical relationships at two granularities: cities and countries. Given a location L_i and an event cluster E_i, L_i ∈ E_i iff at least one microblog post M_i ∈ E_i is made from a location in L_i. This allows a location to be a part of multiple clusters at the same time.
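The membership rule above, under which a location joins every cluster containing at least one post made from it, can be sketched as follows; the post identifiers, cluster names, and helper functions are our own illustration.

```python
# Sketch of the location-cluster membership rule described above:
# a location L_i belongs to event cluster E_i iff at least one post in E_i
# originates from L_i, so one location may belong to several clusters.
def locations_of_cluster(posts, location_of_post):
    return {location_of_post[p] for p in posts}

def clusters_of_location(location, clusters, location_of_post):
    return sorted(cid for cid, posts in clusters.items()
                  if location in locations_of_cluster(posts, location_of_post))

location_of_post = {"m1": "Delhi", "m2": "Mumbai", "m3": "Delhi"}
clusters = {"E1": ["m1", "m2"], "E2": ["m3"]}
print(clusters_of_location("Delhi", clusters, location_of_post))  # ['E1', 'E2']
```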
Each event cluster thus gets a vector of locations. For each location associated with a cluster, they compute a belongingness value of the cluster to the location, yielding a belongingness value vector. They quantify the geographical relationship strength for each cluster pair by associating geographies with each of the belongingness value vectors. To compute belongingness, they augment the L_i vector to an augmented vector L̃_i.

In the first pass of the two-pass process described earlier, relationships are established across messages using a neighborhood generation algorithm, and DBScan is used for text stream clustering, continuously grouping messages into topics; the cluster shapes keep changing over time. They analyze the clusters to determine the hot topics from the posts. In the second pass, the spatial analysis, they assign locations to the messages on the world map, using the spatial locality characteristics of messages, where spatial locality describes the high concentration of a set of messages in a specific geo-location. They record the distribution of locations of topics at a given point of time using a location feature vector. They observe that the distribution of the population that discussed a given event becomes global once a news outlet broadcasts the news, expanding the geographical span of the location feature vector associated with the detected event. They formulate the probability of topic_i belonging to location loc_j as

p(L = loc_j | topic_i) = (occur_i,j / N_t) · 1/(|loc_j ∈ topic_i|)

In other words, they derive the probability of topic_i belonging to location loc_j as the ratio of the count of messages containing loc_j in topic_i (occur_i,j) to the total message count N_t; topics discussed widely across many locations are penalized with the penalty factor 1/(|loc_j ∈ topic_i|). They determine a candidate location by the maximum of this probability for topic_i: candiLoc(topic_i) = argmax_loc_j {p(L = loc_j | topic_i)}.
They compute whether a topic is recognized as local or global by trading off the sparsity level and the concentricity of the topic via a cut-off point θ: a topic is deemed local if the likelihood of its candidate location crosses the threshold. Thus, they recognize an event with a high-density distribution as local, and otherwise as global. They experimentally demonstrate the effectiveness of their method over 52,195,773 Twitter messages collected between January 6th, 2011 and March 11th, 2011.

In a study demonstrating real-life effectiveness on the pandemic disease data that authorities use for disease control, BIB004 use Twitter to collect hashtag-based data pertaining to influenza-like illnesses. Using users' known positions (such as from 3G phones), profile locations, and periodically collected data, they form a spatio-temporal influenza database. Their experiments show a high correlation coefficient (0.9846) with ground-truth illness data reported to the authorities. They use this platform to develop a regression model that effectively improves the prediction of influenza cases. BIB001 analyzes a combination of geo-spatial and temporal interest patterns on Twitter for situation detection and control applications, using text, image and video data, and demonstrates the effectiveness of the system on a Swine Flu monitoring application. BIB006 observe the presence of meaningful temporal, geo-spatial and geo-temporal tag clusters on a Flickr dataset; to enable easy recognition of semantic relationships across tags by humans, they provide a visualization system for geographical and temporal tag distributions. BIB005 analyze the check-in behavior and inter-check-in distances of users to several geo-locations, using spatio-temporal patterns in user mobility, and also analyze activity transitions: finding a likely next activity given a current activity at a location.
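A minimal sketch of the scoring just described, with the penalty for widely spread topics and the θ cut-off; the counts, location names, and threshold value are our own assumptions.

```python
# p(L = loc_j | topic_i) = (occur_{i,j} / N_t) * 1/(number of locations in topic_i);
# the topic is "local" iff the best candidate's score reaches the cut-off theta.
def location_probabilities(occur, n_messages):
    """occur: dict location -> count of topic messages mentioning it."""
    penalty = 1.0 / len(occur)  # penalize topics spread over many locations
    return {loc: (c / n_messages) * penalty for loc, c in occur.items()}

def classify_topic(occur, n_messages, theta):
    probs = location_probabilities(occur, n_messages)
    candi = max(probs, key=probs.get)  # candiLoc(topic_i)
    return candi, ("local" if probs[candi] >= theta else "global")

occur = {"Tokyo": 80, "Osaka": 10, "Seoul": 10}  # hypothetical counts, N_t = 100
print(classify_topic(occur, 100, theta=0.2))  # ('Tokyo', 'local')
```

A topic spread evenly over many locations gets a low maximum score and is therefore classified as global, matching the intent of the penalty factor.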
BIB002 detect unusual geo-social events from Twitter using geo-tagged tweets: they learn geographical regularities from usual crowd behavior patterns and find deviations from these patterns at the time under consideration.
Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Detection and Spread of Topics <s> Twitter is a user-generated content system that allows its users to share short text messages, called tweets, for a variety of purposes, including daily conversations, URLs sharing and information news. Considering its world-wide distributed network of users of any age and social condition, it represents a low level news flashes portal that, in its impressive short response time, has the principal advantage. In this paper we recognize this primary role of Twitter and we propose a novel topic detection technique that permits to retrieve in real-time the most emergent topics expressed by the community. First, we extract the contents (set of terms) of the tweets and model the term life cycle according to a novel aging theory intended to mine the emerging ones. A term can be defined as emerging if it frequently occurs in the specified time interval and it was relatively rare in the past. Moreover, considering that the importance of a content also depends on its source, we analyze the social relationships in the network with the well-known Page Rank algorithm in order to determine the authority of the users. Finally, we leverage a navigable topic graph which connects the emerging terms with other semantically related keywords, allowing the detection of the emerging topics, under user-specified time constraints. We provide different case studies which show the validity of the proposed approach. <s> BIB001 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Detection and Spread of Topics <s> We present TwitterMonitor, a system that performs trend detection over the Twitter stream. The system identifies emerging topics (i.e. 'trends') on Twitter in real time and provides meaningful analytics that synthesize an accurate description of each topic. 
Users interact with the system by ordering the identified trends using different criteria and submitting their own description for each trend. We discuss the motivation for trend detection over social media streams and the challenges that lie therein. We then describe our approach to trend detection, as well as the architecture of TwitterMonitor. Finally, we lay out our demonstration scenario. <s> BIB002 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Detection and Spread of Topics <s> Hashtags are used in Twitter to classify messages, propagate ideas and also to promote specific topics and people. In this paper, we present a linguistic-inspired study of how these tags are created, used and disseminated by the members of information networks. We study the propagation of hashtags in Twitter grounded on models for the analysis of the spread of linguistic innovations in speech communities, that is, in groups of people whose members linguistically influence each other. Differently from traditional linguistic studies, though, we consider the evolution of terms in a live and rapidly evolving stream of content, which can be analyzed in its entirety. In our experimental results, using a large collection crawled from Twitter, we were able to identify some interesting aspects -- similar to those found in studies of (offline) speech -- that led us to believe that hashtags may effectively serve as models for characterizing the propagation of linguistic forms, including: (1) the existence of a "preferential attachment process", that makes the few most common terms ever more popular, and (2) the relationship between the length of a tag and its frequency of use. The understanding of formation patterns of successful hashtags in Twitter can be useful to increase the effectiveness of real-time streaming search algorithms. 
<s> BIB003 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Detection and Spread of Topics <s> We present a novel topic modelling-based methodology to track emerging events in microblogs such as Twitter. Our topic model has an in-built update mechanism based on time slices and implements a dynamic vocabulary. We first show that the method is robust in detecting events using a range of datasets with injected novel events, and then demonstrate its application in identifying trending topics in Twitter. <s> BIB004 </s> Literature Survey on Interplay of Topics, Information Diffusion and Connections on Social Networks <s> Detection and Spread of Topics <s> Social networks play a fundamental role in the diffusion of information. However, there are two different ways of how information reaches a person in a network. Information reaches us through connections in our social networks, as well as through the influence external out-of-network sources, like the mainstream media. While most present models of information adoption in networks assume information only passes from a node to node via the edges of the underlying network, the recent availability of massive online social media data allows us to study this process in more detail. We present a model in which information can reach a node via the links of the social network or through the influence of external sources. We then develop an efficient model parameter fitting technique and apply the model to the emergence of URL mentions in the Twitter network. Using a complete one month trace of Twitter we study how information reaches the nodes of the network. We quantify the external influences over time and describe how these influences affect the information adoption. We discover that the information tends to "jump" across the network, which can only be explained as an effect of an unobservable external influence on the network. 
We find that only about 71% of the information volume in Twitter can be attributed to network diffusion, and the remaining 29% is due to external events and factors outside the network. <s> BIB005
Topics are identified using: (a) hashtags of microblogs like Twitter (e.g., BIB003), (b) bursty keyword identification (e.g., BIB001 and BIB002), and (c) probability distributions of latent concepts over keywords in user-generated content (e.g., BIB004). Bursty topics are often treated as trending topics for modeling and analysis. The shortcomings of the literature on topic detection appear to be the following.

Consideration of social influence: Literature exploring the impact of influence on the emergence of topics leaves many questions open. A better understanding is needed of whether users having general and topic-specific influence create long-lasting topics and high information outreach. How do structures such as communities emerge from social connections? What is the role of influence around topics there? Do topics created by different influencers tend to spread together or compete with each other? What is the social relationship of influencers in such settings?

Managing topic complexity along with scale of detection: Hashtags and bursty keywords, two of the popular methods to identify topics/trends, often represent simple single-word concepts. These are often not disambiguated, leading to information loss. For instance, #IITDelhi and #IITDelhiIndia are conceptually the same "topics" (or trends), yet they are mostly treated as different topics in the literature. No work unifies such concepts automatically (BIB003 unifies them manually). Algorithms that detect topics as probability distributions over n-gram concept sets do not scale well enough to quickly cover a large fraction of social network messages. Identifying complex topics fast and at scale, while representing them without information loss, needs research focus.

Information-rich multimedia data analysis: There is space to improve the state of the art of topic detection by considering not just text but also other kinds of inputs, such as images and videos, for detecting topics of interest and thereby conducting analyses.
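As a rough illustration of the bursty-keyword style of topic detection mentioned in (b), the sketch below flags a term as emerging when it is frequent in the current window but was comparatively rare before. This is a drastic simplification of the cited aging-theory methods, and the thresholds and data are arbitrary assumptions of ours.

```python
def emerging_terms(current_counts, past_counts, min_count=10, burst_ratio=5.0):
    """Flag terms frequent now (>= min_count) but rare before (< now/burst_ratio)."""
    return sorted(term for term, now in current_counts.items()
                  if now >= min_count
                  and now >= burst_ratio * max(past_counts.get(term, 0), 1))

past = {"weather": 40, "election": 3}
now = {"weather": 45, "election": 30, "earthquake": 12}
print(emerging_terms(now, past))  # ['earthquake', 'election']
```

A persistently frequent term like "weather" is not flagged, while a previously unseen term ("earthquake") or a sharply rising one ("election") is.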
One could also consider the commonalities of the types of resources shared, such as the objects that the URLs shared by users point to, for topic detection; the existing literature has not explored this.

Consideration of the state of the social network: Topics may not necessarily emerge from external events. Topics might get created because of the state that a given social network is already in. This is not yet explored in the literature. In such settings, the state of the social network can be determined by the prior set of topics, ongoing discussions, the set of participants, their social relationships and other relevant attributes, and be filtered via aspects such as geographies and communities.

Defining discussions: The literature mostly assumes that a microblog discussion is nothing but a topic (such as a Twitter hashtag) being mentioned by members of a social network, without attempting to define discussions and validate any such definition. Some research works, such as ], attempt to define discussions using message clustering and temporal filters. However, attention is clearly required to better define discussions and to justify such definitions.

The closed-world assumption: The literature usually treats topic lifecycles and information diffusion as incidents internal to given social networks, as a closed world. However, a preliminary study by BIB005 shows a significant impact of external information sources on information diffusion. This necessitates a deeper study of external impact on information diffusion, and an exploration of the validity of the closed-world assumption that most of the literature makes.
SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> It is shown how globally stable model reference adaptive control systems may be designed when one has access to only the plant's input and output signals. Controllers for single input-single output, nonlinear, nonautonomous plants are developed based on Lyapunov's direct method and the Meyer-Kalman-Yacubovich lemma. Derivatives of the plant output are not required, but are replaced by filtered derivative signals. An augmented error signal replaces the error normally used, which is defined as the difference between the model and plant outputs. However, global stability is assured in the sense that the normally used error signal approaches zero asymptotically. <s> BIB001 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> The paper considers the control of an unknown linear time-invariant plant using Direct and Indirect Model Reference Adaptive Control. Employing a specific controller structure and the concept of positive realness, adaptive laws are derived using Indirect Control which are identical to those obtained in the case of Direct Control. The stability questions that arise are also shown to be the same. Simulation results using the new scheme are presented for the control of both stable and unstable plants. <s> BIB002 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> This paper establishes global convergence for a class of adaptive control algorithms applied to discrete time multi-input multi-output deterministic linear systems. It is shown that the algorithms will ensure that the system inputs and outputs remain bounded for all time and that the output tracking error converges to zero. 
<s> BIB003 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> Progress in theory and applications of adaptive control is reviewed. Different approaches are discussed with particular emphasis on model reference adaptive systems and self-tuning regulators. Techniques for analysing adaptive systems are discussed. This includes stability and convergence analysis. It is shown that adaptive control laws can also be obtained from stochastic control theory. Issues of importance for applications are covered. This includes parameterization, tuning, and tracking, as well as different ways of using adaptive control. An overview of applications is given. This includes feasibility studies as well as products based on adaptive techniques. <s> BIB004 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> 1. Introduction.- 2. Continuous-time identifiers and adaptive observers.- 3. Discrete-time identifiers.- 4. Robustness improvement of identifiers and adaptive observers.- 5. Adaptive control in the presence of disturbances.- 6. Reduced-order adaptive control.- 7. Decentralized adaptive control.- 8. Reduced order-decentralized adaptive control.- Corrections. <s> BIB005 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> This unified survey focuses on linear discrete-time systems and explores the natural extensions to nonlinear systems. In keeping with the importance of computers to practical applications, the authors emphasize discrete-time systems. Their approach summarizes the theoretical and practical aspects of a large class of adaptive algorithms.1984 edition. 
<s> BIB006 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> An algorithm is proposed for self-tuning optimal fixed-lag smoothing or filtering for linear discrete-time multivariable processes. A z -transfer function solution to the discrete multivariable estimation problem is first presented. This solution involves spectral factorization of polynomial matrices and assumes knowledge of the process parameters and the noise statistics. The assumption is then made that the signal-generating process and noise statistics are unknown. The problem is reformulated so that the model is in an innovations signal form, and implicit self-tuning estimation algorithms are proposed. The parameters of the innovation model of the process can be estimated using an extended Kalman filter or, alternatively, extended recursive least squares. These estimated parameters are used directly in the calculation of the predicted, smoothed, or filtered estimates. The approach is an attempt to generalize the work of Hagander and Wittenmark. <s> BIB007 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> We propose a new model reference adaptive control algorithm and show that it provides the robust stability of the resulting closed-loop adaptive control system with respect to unmodeled plant uncertainties. The robustness is achieved by using a relative error signal in combination with a dead zone and a projection in the adaptive law. The extra a priori information needed to design the adaptive law, are bounds on the plant parameters and an exponential bound on the impulse response of the inverse plant transfer function. 
<s> BIB008 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> Stability theory simple adaptive systems adaptive observers the control problem persistent excitation error models robust adaptive control the control problem - relaxation of assumptions multivariable adaptive systems applications of adaptive control. <s> BIB009 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> MODEL REFERENCE ADAPTIVE CONTROL <s> 1. Introduction. Control System Design Steps. Adaptive Control. A Brief History. 2. Models for Dynamic Systems. Introduction. State-Space Models. Input/Output Models. Plant Parametric Models. Problems. 3. Stability. Introduction. Preliminaries. Input/Output Stability. Lyapunov Stability. Positive Real Functions and Stability. Stability of LTI Feedback System. Problems. 4. On-Line Parameter Estimation. Introduction. Simple Examples. Adaptive Laws with Normalization. Adaptive Laws with Projection. Bilinear Parametric Model. Hybrid Adaptive Laws. Summary of Adaptive Laws. Parameter Convergence Proofs. Problems. 5. Parameter Identifiers and Adaptive Observers. Introduction. Parameter Identifiers. Adaptive Observers. Adaptive Observer with Auxiliary Input. Adaptive Observers for Nonminimal Plant Models. Parameter Convergence Proofs. Problems. 6. Model Reference Adaptive Control. Introduction. Simple Direct MRAC Schemes. MRC for SISO Plants. Direct MRAC with Unnormalized Adaptive Laws. Direct MRAC with Normalized Adaptive Laws. Indirect MRAC. Relaxation of Assumptions in MRAC. Stability Proofs in MRAC Schemes. Problems. 7. Adaptive Pole Placement Control. Introduction. Simple APPC Schemes. PPC: Known Plant Parameters. Indirect APPC Schemes. Hybrid APPC Schemes. Stabilizability Issues and Modified APPC. Stability Proofs. Problems. 8. Robust Adaptive Laws. Introduction. Plant Uncertainties and Robust Control. 
Instability Phenomena in Adaptive Systems. Modifications for Robustness: Simple Examples. Robust Adaptive Laws. Summary of Robust Adaptive Laws. Problems. 9. Robust Adaptive Control Schemes. Introduction. Robust Identifiers and Adaptive Observers. Robust MRAC. Performance Improvement of MRAC. Robust APPC Schemes. Adaptive Control of LTV Plants. Adaptive Control for Multivariable Plants. Stability Proofs of Robust MRAC Schemes. Stability Proofs of Robust APPC Schemes. Problems. Appendices. Swapping Lemmas. Optimization Techniques. Bibliography. Index. License Agreement and Limited Warranty. <s> BIB010
First attempts at using adaptive control techniques were developed during the sixties and were based on intuitive and even ingenious ideas, yet they ended in failure, mainly because at the time there was not much knowledge of stability analysis with nonstationary parameters. Modern methods of stability analysis that had been developed by Lyapunov at the end of the 19th century were not broadly known, much less used, in the West. After the initial problems with the adaptive control techniques of the sixties, stability analysis became a center point in new developments related to adaptive control. The participation of some of the leading researchers in the control community at the time, such as Narendra, Landau, Åström, Kokotovic, Goodwin, Morse, Grimble and many others, added a remarkable contribution to the better modeling and understanding of adaptive control methodologies BIB001, (vanAmerongen and TenCate, 1975), BIB002, BIB009, BIB003, BIB006, BIB004, (Astrom and Wittenmark, 1989), BIB005, BIB007, (Mareels, 1984), BIB008, BIB010, (Bitmead, Gevers and Wertz, 1990). New tools and techniques were developed and used, and they finally led to successful proofs of stability, mainly based on the Lyapunov stability approach.

The standard methodology was the Model Reference Adaptive Control (MRAC) approach which, as its name states, basically requires the possibly "bad" plant to follow the behavior of a "good" Model Reference

ẋ_m(t) = A_m x_m(t) + B_m u_m(t)
y_m(t) = C_m x_m(t)

The control signal that feeds the plant is a linear combination of the Model state variables and the Model input. If the plant parameters were fully known, one could compute the corresponding controller gains that would force the plant to asymptotically follow the Model. Because the entire plant state ultimately behaves exactly as the model state, MRAC is sometimes interpreted as pole-zero placing.
However, in this report we only relate to MRAC with respect to its main aim, namely, that the plant output should follow the desired behavior represented by the model output. When the plant parameters are not (entirely) known, one is naturally led to use adaptive control gains. The basic idea is that the plant is fed a control signal that is a linear combination of the model state through some gains; if all gains are correct, the entire plant state vector follows the model state vector. The resulting "tracking error" e_y(t) = y_m(t) − y_p(t) can be monitored and used to generate adaptive gains. The basic idea of the adaptation is as follows: assume that one component of the control signal fed to the plant comes from the variable x_mi through the gain k_xi. If the gain is not perfectly correct, this component contributes to the tracking error, and therefore the tracking error and the component x_mi are correlated. This correlation is used to generate the adaptive gain

k̇_xi(t) = γ_i e_y(t) x_mi(t)

where γ_i is a parameter that affects the rate of adaptation. The adaptation continues until the correlation diminishes and ultimately vanishes; the gain derivative then tends to zero and the gain itself is (hopefully) supposed to ultimately reach a constant value. In vectorial form,

K̇_x(t) = Γ_x e_y(t) x_m^T(t)

As Figure 1 below shows, there are various other components that can be added to improve the performance of the MRAC system, such as gains operating on the output tracking error and on the model input, so the total control signal is

u_p(t) = K_e(t) e_y(t) + K_x(t) x_m(t) + K_u(t) u_m(t)

Many other elements, such as adaptive observers, can be added to this basic MRAC scheme and can be found in the references cited above, yet here we want to pursue just the basic Model Reference idea. This approach was able to generate rigorous proofs of stability which showed that not only the tracking error but even the entire state error asymptotically vanishes. This result implied that the plant behavior would asymptotically reproduce the stable model behavior and would ultimately achieve the desired performance represented by the ideal Model Reference.
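As a toy illustration of the adaptive law above, the following scalar simulation (our own example; the plant, model, gain names, and adaptation rates are hypothetical, and the plant 1/(s+1) is SPR, so the adaptation is well behaved) adapts the gains by correlating the tracking error with the model signals:

```python
# Scalar MRAC toy: plant x_p' = -x_p + u, model x_m' = -2*x_m + 2*u_m, step u_m = 1.
# Control u = k_e*e + k_x*x_m + k_u*u_m with e = x_m - x_p, and each adaptive gain
# integrates gamma * e * (its corresponding signal), via forward Euler.
dt, steps, gamma = 1e-3, 60_000, 5.0
x_p = x_m = 0.0
k_e = k_x = k_u = 0.0
u_m = 1.0
for _ in range(steps):
    e = x_m - x_p                          # tracking error
    u = k_e * e + k_x * x_m + k_u * u_m    # total control signal
    x_p += dt * (-x_p + u)                 # plant
    x_m += dt * (-2.0 * x_m + 2.0 * u_m)   # model reference
    k_e += dt * gamma * e * e              # correlate error with itself
    k_x += dt * gamma * e * x_m            # correlate error with model state
    k_u += dt * gamma * e * u_m            # correlate error with model input
print(abs(x_m - x_p))  # tracking error ends up small
```

With the correct fixed gains k_x = −1, k_u = 2 this plant would match the model exactly; the point of the sketch is that the correlation-driven adaptation drives the error toward zero without knowing those values.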
In particular, the Lyapunov stability technique revealed the prior conditions that had to be satisfied in order to guarantee stability, and allowed rigorous proofs of stability of the adaptive control system. Because, along with the dynamics of the state or the state error, adaptive control systems also introduce the adaptive gain dynamics, the positive definite quadratic Lyapunov function had to contain both the errors and the adaptive gains, and usually had the form

V(t) = e^T(t) P e(t) + tr[(K(t) − K) Γ^{-1} (K(t) − K)^T]

Here, e(t) is the state error, K(t) is the matrix of adaptive gains, and K is the set of ideal gains that could perform perfect model following if the parameters were known, and that the adaptive control gains were supposed to asymptotically reach.

Yet, in spite of successful proofs of stability, very little use has been made of adaptive control techniques in practice. Therefore, we will first discuss some of the problems that are inherent to the classical MRAC approach and that are emphasized when one intends to use adaptive methods with such applications as large flexible space structures and similar large-scale systems. First, the fact that the entire plant state vector is supposed to follow the behavior of the model state vector immediately implies that the model is basically supposed to be of the same order as the plant. If this is not the case, various problems have been shown to appear, including total instability. As real-world plants are usually of very high order when compared with the nominal plant model, so-called "unmodeled dynamics" must inherently be considered in the context of this approach. The developers of adaptive control techniques were able to show that the adaptive system still demonstrates stability robustness in spite of the unmodeled dynamics, yet to this end they required that the unmodeled dynamics be "sufficiently small." Furthermore, if any state variable of the Model Reference is zero, the corresponding adaptive gain is also zero.
Also, if the model reaches a steady state, some of the various adaptive gains lose their independence, and this point raises the need for some "persistent excitation" or "sufficient excitation." It should be emphasized that the need for sufficiently large models, sufficiently small "unmodeled dynamics" and "sufficient excitation" appears even if one only intends to guarantee the mere stability of the plant, before even mentioning performance. Finally, when all these basic conditions are satisfied, the stability of the adaptive control could initially be proved only if the original plant was Strictly Passive (SP), which in LTI systems implies that its input-output transfer function is Strictly Positive Real (SPR). Passivity-like conditions appear in various forms in different presentations, so they deserve a special section.
SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> Definition 1. <s> Frequency domain conditions for strictly positive real (SPR) functions which appear in literature are often only necessary or only sufficient. This point is raised in [1], [2], where necessary and sufficient conditions in the s -domain are given for a transfer function to be SPR. In this note, the points raised in [1], I2] are clarified further by giving necessary and sufficient conditions in the frequency domain for transfer functions to be SPR. These frequency-domain conditions are easier to test than those given in the s -domain or time domain [1], [2]. <s> BIB001 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> Definition 1. <s> ABSTRACT Simple adaptive control systems were recently shown to be globally stable and to maintain robustness with disturbances if the controlled system is “almost strictly positive real” namely, if there exists a constant output feedback (unknown and not needed for implementation) such that the resulting closed loop transfer function is strictly positive real. In this paper it is shown how to use parallel feedforward and the stabi 1izability properties of systems in order to satisfy the “almost positivity” condition. The feedforward configuration may be constant, if some prior knowledge is given, or adaptive, in general. This way, simple adaptive controllers can be implemented in a large number of complex control systems, without requiring the order of the plant or the pole-excess as prior knowledge. <s> BIB002 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> Definition 1. <s> The concepts of G-passivity and G-passifiability (feedback G-passivity) are introduced extending the concepts of passivity and passifiability to nonsquare systems (systems with different numbers of inputs and outputs). 
Necessary and sufficient conditions for strict G-passifiability of nonsquare linear systems by output feedback are given. Simple description of a broad subclass of passifying feedbacks is proposed. The proofs are based on a version of the celebrated Yakubovich-Kalman-Popov lemma. <s> BIB003 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> Definition 1. <s> A recent publication states and proves the conditions under which a linear time-invariant system, with state-space realization A,B,C, can be made strictly positive real via constant output feedback. This note is intended to briefly present the development of the proof and to give due credit to the first proofs of this statement. <s> BIB004
A linear time-invariant system with a state-space realization {A, B, C}, where A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{m×n}, with the m×m transfer function T(s) = C(sI − A)^{−1}B, is called "strictly passive (SP)" and its transfer function "strictly positive real (SPR)" if there exist two positive definite symmetric (PDS) matrices, P and Q, such that the following two relations are simultaneously satisfied: The relation between the strict passivity conditions (16)-(17) and the strict positive realness of the corresponding transfer function has been treated elsewhere BIB001 . Relation (16) is the common algebraic Lyapunov equation and shows that an SPR system is asymptotically stable. One can also show that conditions (16)-(17) imply that the system is strictly minimum-phase, yet simultaneous satisfaction of both conditions (16)-(17) is far from being guaranteed even in stable and minimum-phase systems, and therefore the SPR condition seemed much too demanding. (Indeed, some colleagues in the general control community used to ask: if the system is already asymptotically stable and minimum-phase, why would one need adaptive controllers?) For a long time, the passivity condition had been considered very restrictive (and rather obscure), and at some point the adaptive control community tried to drop it and do without it. The passivity condition was somewhat mitigated when it was shown that stability with adaptive controllers could be guaranteed even for the non-SPR system (1)-(2) if there exists a constant output feedback gain (unknown and not needed for implementation) such that the resulting fictitious closed-loop system is SPR, namely, satisfies the passivity conditions (16)-(17). Because in this case the original system (1)-(2) was only separated by a simple constant output feedback from strict passivity, it was called "Almost Strictly Positive Real (ASPR)" or "Almost Strictly Passive (ASP)" BIB002 .
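As a concrete illustration of Definition 1 (our own sketch, not taken from the survey), strict positive realness of a SISO transfer function can be screened numerically through the frequency-domain characterization discussed in BIB001: asymptotic stability together with Re T(jω) > 0. The helper names and test systems below are illustrative choices, and a finite grid check is only a numerical screen, not a proof:

```python
import numpy as np

def transfer(A, B, C, s):
    """Evaluate T(s) = C (sI - A)^{-1} B for a SISO realization."""
    n = A.shape[0]
    return (C @ np.linalg.solve(s * np.eye(n) - A, B)).item()

def looks_spr(A, B, C, omegas=np.logspace(-3, 3, 400)):
    """Numerical screen for SPR of a SISO system: all poles in the open
    left half-plane and Re T(jw) > 0 on a frequency grid."""
    if np.any(np.linalg.eigvals(A).real >= 0):
        return False
    return all(transfer(A, B, C, 1j * w).real > 0 for w in omegas)

# T(s) = 1/(s+1): stable and Re T(jw) = 1/(1+w^2) > 0, so it passes.
A1, B1, C1 = np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]])
# T(s) = 1/(s^2+s+1): stable but of relative degree 2, so Re T(jw)
# turns negative for w > 1 and the screen rejects it.
A2 = np.array([[0.0, 1.0], [-1.0, -1.0]])
B2 = np.array([[0.0], [1.0]])
C2 = np.array([[1.0, 0.0]])
print(looks_spr(A1, B1, C1), looks_spr(A2, B2, C2))
```

The second example shows why the SPR class seemed so narrow: even a stable, well-damped system fails once its relative degree exceeds one.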
Note that such ASP systems are sometimes called "feedback passive" or "passifiable" BIB003 (Fradkov-Hill, 1998). However, as we will show, any stabilizable system is also passifiable via parallel feedforward, so those systems that are only at the distance of a constant feedback gain from Strict Passivity deserve a special name. At the time, this "mitigation" of the passivity conditions did not make a great impression, because it was still not clear what systems would satisfy the new conditions. (Some even claimed that if SPR seemed to be another name for the void class of systems, the "new" class of ASPR was only adding the boundary.) Nonetheless, some ideas were available. Because a constant output gain feedback was supposed to stabilize the system, it seemed apparent that the original plant was not required to be stable. Also, because it was known that SPR systems were minimum-phase and that the product CB is Positive Definite Symmetric (PDS), it was intuitive to assume that minimum-phase systems with Positive Definite Symmetric CB were natural ASPR candidates . Indeed, simple Root-locus techniques were sufficient to prove this result in SISO systems, and many examples of minimum-phase MIMO systems with CB product PDS were shown to be ASPR BIB002 . However, it was not clear how many such MIMO systems actually were ASPR. Because the ASPR property can be stated as a simple condition and because it is the main condition needed to guarantee stability with adaptive controllers, it is useful to present here the ASPR theorem for general multi-input-multi-output systems: Theorem 1. Any linear time-invariant system with the state-space realization {A, B, C}, where A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{m×n}, with the m×m transfer function T(s) = C(sI − A)^{−1}B, that is minimum-phase and where the matrical product CB is PDS, is "almost strictly passive (ASP)" and its transfer function "almost strictly positive real (ASPR)."
Although the original plant is not SPR, a (fictitious) closed-loop system satisfies the SPR conditions; in other words, there exist two positive definite symmetric (PDS) matrices, P and Q, and a positive definite gain such that the following two relations are simultaneously satisfied: As a matter of fact, a proof of Theorem 1 had been available in the Russian literature since 1976, yet it was not known in the West, where many other works have later independently rediscovered, reformulated, and further developed the idea (see BIB004 and references therein for a brief history and for a simple and direct, algebraic, proof of this important statement). Even as late as 1999, this simple ASPR condition was still presented as some algebraic condition that might look obscure to the control practitioner. On the other hand, an important contribution emphasized the special property of ASPR systems by proving that if a system cannot be made SPR via constant output feedback, no dynamic feedback can render it SPR. Theorem 1 has thus managed to explain the rather obscure passivity conditions with the help of new conditions that can be understood by control practitioners. It is useful to notice an important property that makes an ASPR system a good candidate for stable adaptive control: if a plant is minimum-phase and its input-output matrical product CB is Positive Definite Symmetric (PDS), it is stabilizable via some static Positive Definite (PD) output feedback. Furthermore, if the output feedback gain is increased beyond some minimal value, the system remains stable even if the gain increase is nonstationary. The required positivity of the product CB could be expected, as it seems to be a generalization of the sign of the transfer function that allows using negative feedback in SISO systems.
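A small numerical sketch (our own example) illustrates Theorem 1 and the high-gain property just described: the minimum-phase plant T(s) = (s+1)/((s+2)(s−1)) has an unstable pole and CB = 1 > 0, and the closed loop A − kBC is stable for every output gain beyond a minimal value:

```python
import numpy as np

# Minimum-phase plant T(s) = (s+1)/((s+2)(s-1)): stable zero at -1, one
# unstable pole at +1, and CB = 1 > 0 -- an ASPR candidate per Theorem 1.
A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])

def stable_with_gain(k):
    """Is the closed loop A - k*B*C asymptotically stable?"""
    return bool(np.all(np.linalg.eigvals(A - k * B @ C).real < 0))

# The closed-loop polynomial is s^2 + (1+k)s + (k-2): stable exactly for
# k > 2, and stability is then kept for arbitrarily large (even
# nonstationary) gains -- the high-gain property discussed above.
print((C @ B).item(), [stable_with_gain(k) for k in (0.0, 1.0, 3.0, 1000.0)])
```

Note that the plant itself is unstable, confirming that ASPR does not require open-loop stability, only the minimum-phase and CB conditions.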
However, although at the time it seemed to be absolutely necessary for the ASPR conditions, the required CB symmetry proved to be rather difficult to fulfill in practice, in particular in adaptive control systems where the plant parameters are not known. After many attempts that have ended in failure, a recent publication has managed to eliminate the need for a symmetric CB. First, it was easy to observe that the Lyapunov function remains positive definite if the gain term is rewritten as follows:
SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> SIMPLE ADAPTIVE CONTROL (SAC), OR THE SIMPLIFIED APPROACH TO MODEL REFERENCE ADAPTIVE CONTROL <s> Adaptive model reference procedures which do not require explicit parameter identification are considered for large scale systems. Such application is feasible provided that there exists a feedback gain matrix such that the resulting input-output transfer function is strictly positive real. Consideration of a simply supported beam shows the positive real condition to be satisfied for velocity and velocity plus scaled positional outputs sensed at the same points where the actuators are positioned. Results show the adaptive algorithm to indeed be capable of satisfactory output model following performance with all beam states stabilized. <s> BIB001 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> SIMPLE ADAPTIVE CONTROL (SAC), OR THE SIMPLIFIED APPROACH TO MODEL REFERENCE ADAPTIVE CONTROL <s> Model reference adaptive control procedures that do not require explicit parameter identification are considered for large structural systems. Although such applications have been shown to be feasible for multivariable systems, provided there exists a feedback gain matrix which makes the resulting input/output transfer function strictly positive real, it is now shown that this constraint is overly restrictive and that only positive realness is required. Subsequent consideration of a simply supported beam shows that if actuators and sensors are collocated, then the positive realness constraint will be satisfied and the model reference adaptive control will then indeed be suitable for velocity following when only velocity sensors are available and for both position and velocity following when velocity plus scaled position outputs are measured.
In both cases, all states, regardless of system dimension, are guaranteed to be stable. The need for parameter estimation and/or adaptive control of any system arises because of ignorance of the system's internal structure and critical parameter values, as well as changing control regimes. A large structural system (LSS) is substantially more susceptible to these problems. The most crucial problem of adaptive control of large structures is that the plant is very large or infinite-dimensional and, consequently, the adaptive controller must be based on a low-order model of the system in order to be implemented with an on-line/onboard computer. However, any controller based on a reduced-order model (ROM) must operate in closed loop with the actual system; thus it interacts not only with the ROM but also with the residual subsystem (through the spillover and model error terms). One particular adaptive algorithm that seems applicable to LSS is the direct (or implicit) model reference-based approach taken by Sobel et al. In particular, using command generator tracker (CGT) theory, with Lyapunov stability-based design procedures, they were able to develop for step commands a model reference adaptive control (MRAC) algorithm that, without the need for parameter identification, forced the error between plant and model (which need not be of the same order as the plant) to approach zero, provided that certain plant/model structural conditions are satisfied. Such an adaptation algorithm is very attractive for the control of large structural systems since it eliminates the need for explicitly identifying the large number of modes that must be modeled, and, furthermore, eliminates the spillover effects.
Relative to the conditions that must be satisfied, it was shown that asymptotic stability results provided that the plant input/output transfer matrix is strictly positive real for some feedback gain matrix and provided that there exists a bounded solution to the corresponding deterministic CGT problem. Such a solution, however, does not always exist for structural problems with velocity sensors and, furthermore, the transfer matrix for structural systems is positive real (not strictly positive real) for collocated actuators and rate sensors. <s> BIB002 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> SIMPLE ADAPTIVE CONTROL (SAC), OR THE SIMPLIFIED APPROACH TO MODEL REFERENCE ADAPTIVE CONTROL <s> Introducing readers to adaptive systems in a rigorous but elementary fashion, this text emphasizes the mainstream developments in adaptive control and signal processing of linear discrete time systems. A unified framework is developed whereby the reader can analyze and understand any adaptive system in the literature. The so-called equilibrium analysis facilitates an understanding of the limitations and potential of adaptive systems in a transparent fashion; while the behavioural approach to linear systems plays an essential role at some key points in the text. So-called universal controllers are presented in some detail. Each chapter is accompanied by exercises that aim to develop certain aspects of the theory, as well as to give the reader a better understanding of the actual behaviour of adaptive systems. <s> BIB003
Various kinds of additional prior knowledge have been used and many solutions and additions have been proposed to overcome some of the various drawbacks of the basic MRAC algorithm. However, this paper sticks to the very basic idea of Model Following. The next sections will show that those basically ingenious adaptive control ideas and the systematic stability analysis they introduced had finally led to adaptive control systems that can guarantee stability robustness along with superior performance when compared with alternative, non-adaptive, methodologies. In this section we will first assume that at least one of the passivity conditions presented above holds and will deal with a particular methodology that managed to eliminate the need for the plant order and therefore can mitigate the problems related to "unmodeled dynamics" and "persistent excitation." Subsequent sections will then extend the feasibility of the methodology to those real-world systems that do not inherently satisfy the passivity conditions. The beginning of the alternative adaptive control approach can be found in the intense activities at Rensselaer (RPI) during 1978-1983. At that time, such researchers as Kaufman, Sobel, Barkana, Balas, Wen, and others (Sobel, Kaufman and Mabius, 1982) , BIB001 , BIB002 were trying to use customary adaptive control techniques with large-order MIMO systems, such as planes, large flexible structures, etc. It did not take long to realize that it was impossible to even think of controllers of the same order as the plant, or even of the order of a "nominal" plant. Besides, those were inherently MIMO systems, while customary MRAC techniques at the time were only dealing with SISO systems. Because now the very reduced-order model could not be considered to be even close to the plant, one could not consider full model state following, so this aim was naturally replaced by output model following.
Furthermore, as the (possibly unstable) large-order plant state could not be compared with the reduced-order model state, the model could not be thought to guarantee asymptotic stability of the plant any longer. In order to allow stability of the reduced-order adaptive control system, new adaptive control components that were not deemed to be needed by the customary MRAC had to be considered. We will show that this "small" addition had an astonishing effect towards the successful application of the modified MRAC. In brief, as it was known that stability of adaptive control systems required that the plant be stabilizable via a constant gain feedback, the natural question was why not use this direct output feedback. Following this idea, an additional adaptive output feedback term was added to the adaptive algorithm that otherwise is very similar to the usual MRAC algorithms, namely, where we denote the reference vector Subsequently in this paper, it will be shown that the new approach uses the model as a Command Generator and therefore it is sometimes called Adaptive Command Generator Tracker. Because it also uses low-order models and controllers, it was ultimately called Simple Adaptive Control (SAC). Before we discuss the differences between the new SAC approach and the classical MRAC, it is useful to first dwell on the special role of the direct output feedback term. If the plant parameters were known, one could choose an appropriate gain K_e and stabilize the plant via constant output feedback control As we already mentioned above, it was known that an ASPR system (or, as we now know, a minimum-phase plant with appropriate CB product) could be stabilized by a positive definite output feedback gain. Furthermore, it was known that ASPR systems are high-gain stable, so stability of the plant is maintained even if the gain value goes arbitrarily high beyond some minimal value.
Whenever one may have sufficient prior knowledge to assume that the plant is ASPR, yet does not have sufficient knowledge to choose a good control gain, one can use the output itself to generate the adaptive gain by the rule: and the control In the more general case when the plant is required to follow the output of the model, one would use the tracking error to generate the adaptive gain K and the control We will show how this adaptive gain addition is able to avoid some of the most difficult inherent problems related to the standard MRAC and to add robustness to its stability. Although it was developed as a natural compensation for the low-order models and was successfully applied at Rensselaer as just one element of the Simple (Model Reference) Adaptive Control methodology, it is worth mentioning that, similarly to the first proof of the ASPR property, the origins of this specific adaptive gain can again be found in early work by Fradkov in the Russian literature. Besides, later on this gain received a second life and became very popular after 1983 in the context of adaptive control "when the sign of high-frequency gain is unknown." In this context, and after a very rigorous mathematical treatment, it also received a new name and is sometimes called the Byrnes-Willems gain. Its useful properties have been thoroughly researched, and some may even call this single adaptive gain Simple Adaptive Control, as they were apparently able to show that it can do "almost" everything (Ilchmann, Owens and Pratzel-Wolters, 1987) BIB003 . Indeed, if an ASPR system is high-gain stable, it seems very attractive to let the adaptive gain increase to even very high values in order to achieve good performance, represented by small tracking errors.
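The one-gain adaptive rule discussed above can be illustrated on a scalar unstable ASPR plant; the plant, the adaptation gain gamma, the step size, and the horizon below are all illustrative choices of ours:

```python
import numpy as np

# Scalar unstable ASPR plant xdot = x + u, y = x (T(s) = 1/(s-1)),
# stabilized by u = -k*y with the adaptive gain rule kdot = gamma*y^2.
dt, gamma = 1e-3, 10.0
x, k = 1.0, 0.0
for _ in range(int(20.0 / dt)):   # forward-Euler simulation
    y = x
    x += dt * (x - k * y)         # plant with feedback u = -k*y
    k += dt * gamma * y * y       # gain is nondecreasing, grows past k = 1
print(f"final |y| = {abs(x):.1e}, final gain k = {k:.3f}")
```

For these initial conditions the continuous-time gain settles analytically at k = 1 + sqrt(11) ≈ 4.32: the gain stops growing as y vanishes, illustrating boundedness of the adaptive gain without convergence to any predefined "ideal" value.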
However, although at first thought one may find that high gains are very attractive, a second thought and some more engineering experience with real-world applications make it clear that high gains may lead to saturations and may excite vibrations and other disturbances. These disturbances may not have appeared in the nominal plant model that was used for design and may not be felt in the real-world plant unless one uses those very high gains. Furthermore, as the motor or the plant dynamics would always require an input signal in order to keep moving and tracking the desired trajectory, it is quite clear that the tracking error cannot be zero or very small unless one uses very high gains indeed. Designers of tracking systems know that feedforward signals that come from the desired trajectory can help achieve low-error or even perfect tracking without requiring the use of dangerously high gains (and, correspondingly, exceedingly high bandwidth) in the closed loop. In the non-adaptive world, feedforward could be problematic because, unlike in the feedback loop, any errors in the feedforward parameters are directly and entirely transmitted to the output tracking error. Here, the adaptive control methodology can demonstrate an important advantage over non-adaptive techniques, because the feedforward parameters are finely tuned by the very tracking error they intend to minimize. The issues discussed here and the need for feedforward again seem to show the intrinsic importance of the basic Model Following idea, and again point to the need for a model. However, the difference between the model used by SAC and the Model Reference used by the standard MRAC is that this time the so-called "Model" does not have to reproduce the plant beyond incorporating the desired input-output behavior of the plant.
At the extreme, it could be just a first-order pole that performs a reasonable step response, or otherwise a higher-order system, just sufficiently large to generate the desired trajectory. As it generates the command, this "model" can also be called a "Command Generator" (Broussard and Berry, 1978) and the corresponding technique "Command Generator Tracker (CGT)." In summary, the adaptive control system monitors all available data, namely, the tracking error, the model states, and the model input command, and uses them to generate the adaptive control signal (Figure 2), which, using the concise notations (27)-(28), gives and the control It is worth noting that, initially, SAC seemed to be a very modest alternative to MRAC with apparently very modest aims and that also seemed to be very restricted by new conditions. Although at the time it probably was the only adaptive technique that could have been used in MIMO systems and with such large systems as large flexible structures, and therefore was almost immediately adopted by many researchers and practitioners, the SAC approach got a cold reception and for a long time was largely ignored by mainstream adaptive control. In retrospect (besides some lack of good salesmanship), at the time this cold reception had some good reasons. Although it was called "simple" as it was quite simple to implement, the theory around SAC was not simple, and many tools that were needed to support its qualities, and that slowly but surely revealed themselves over the years, were still missing. It subsequently required not only developing new analysis tools but also, probably even more importantly, better expertise at understanding their implications before they could be properly used, so that they ultimately managed to highlight the very useful properties of SAC.
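The structure just summarized can be sketched in a few lines; the scalar plant, model, and adaptation gain below are our own illustrative numbers, not taken from the survey:

```python
import numpy as np

# Minimal SAC sketch: the scalar plant xdot = -2x + 3u (parameters assumed
# unknown to the controller, CB = 3 > 0, minimum-phase) must follow the
# reference model xm_dot = -xm + um driven by a step command um = 1.
dt, gamma = 1e-3, 2.0
x, xm = 0.0, 0.0
K = np.zeros(3)                    # adaptive gains [Ke, Kx, Ku]
for _ in range(int(60.0 / dt)):
    um = 1.0
    ey = xm - x                    # output tracking error
    r = np.array([ey, xm, um])     # reference vector (error, model state, command)
    u = K @ r                      # control: u = Ke*ey + Kx*xm + Ku*um
    K += dt * gamma * ey * r       # adaptation: Kdot = gamma * ey * r^T
    x += dt * (-2.0 * x + 3.0 * u) # plant
    xm += dt * (-xm + um)          # reference model (command generator)
print(f"final tracking error = {abs(xm - x):.2e}, gains = {np.round(K, 3)}")
```

Note that with a constant command only the combination K_x + K_u is pinned down (here it settles near 2/3), consistent with the later observation that the gains converge to a set fitting the particular input command rather than to unique predefined values.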
Finally, based on developments that have spanned more than 25 years, we will attempt to show that SAC is in fact the stable MRAC, because right from the beginning it avoids some difficulties that are inherent in the standard MRAC. First, it is useful to notice that because there is no attempt at comparison between the order or the states of the plant and the model, there are no "unmodeled dynamics." Also, because the stability of the system basically rests on the direct output feedback adaptive gain, the model is immaterial in this context and of course there is no need to mention "sufficient excitation." Besides, as we will later show and as it was observed by almost all practitioners who have tried to use it, SAC proved to be good control. While the standard MRAC may have to explain why it does not work when it is supposed to work, SAC may have to explain why it does work even in cases when the (sufficient) conditions are not fully satisfied. Although, similarly to any nonstationary control, in Adaptive Control it is very difficult to find the very minimal conditions that would keep the system stable, it can be shown why SAC may demonstrate some robustness even when the basic sufficient conditions are not satisfied. We note that this last point is just an observation based on experience, yet we must also note that in those cases when the basic conditions are fulfilled, they are always sufficient to guarantee the stability of the adaptive control system, with no exceptions and no counterexamples. In this respect, one can show that the MRAC "counterexamples" become just trivial, stable, and well-behaved examples for SAC.
SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> PROOF OF STABILITY OF SIMPLE ADAPTIVE CONTROL <s> Model reference adaptive control procedures that do not require explicit parameter identification are considered for large structural systems. Although such applications have been shown to be feasible for multivariable systems, provided there exists a feedback gain matrix which makes the resulting input/output transfer function strictly positive real, it is now shown that this constraint is overly restrictive and that only positive realness is required. Subsequent consideration of a simply supported beam shows that if actuators and sensors are collocated, then the positive realness constraint will be satisfied and the model reference adaptive control will then indeed be suitable for velocity following when only velocity sensors are available and for both position and velocity following when velocity plus scaled position outputs are measured. In both cases, all states, regardless of system dimension, are guaranteed to be stable. The need for parameter estimation and/or adaptive control of any system arises because of ignorance of the system's internal structure and critical parameter values, as well as changing control regimes. A large structural system (LSS) is substantially more susceptible to these problems. The most crucial problem of adaptive control of large structures is that the plant is very large or infinite-dimensional and, consequently, the adaptive controller must be based on a low-order model of the system in order to be implemented with an on-line/onboard computer. However, any controller based on a reduced-order model (ROM) must operate in closed loop with the actual system; thus it interacts not only with the ROM but also with the residual subsystem (through the spillover and model error terms).
One particular adaptive algorithm that seems applicable to LSS is the direct (or implicit) model reference-based approach taken by Sobel et al. In particular, using command generator tracker (CGT) theory, with Lyapunov stability-based design procedures, they were able to develop for step commands a model reference adaptive control (MRAC) algorithm that, without the need for parameter identification, forced the error between plant and model (which need not be of the same order as the plant) to approach zero, provided that certain plant/model structural conditions are satisfied. Such an adaptation algorithm is very attractive for the control of large structural systems since it eliminates the need for explicitly identifying the large number of modes that must be modeled, and, furthermore, eliminates the spillover effects. Relative to the conditions that must be satisfied, it was shown that asymptotic stability results provided that the plant input/output transfer matrix is strictly positive real for some feedback gain matrix and provided that there exists a bounded solution to the corresponding deterministic CGT problem. Such a solution, however, does not always exist for structural problems with velocity sensors and, furthermore, the transfer matrix for structural systems is positive real (not strictly positive real) for collocated actuators and rate sensors. <s> BIB001 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> PROOF OF STABILITY OF SIMPLE ADAPTIVE CONTROL <s> Recent publications have presented successful implementations of simple direct adaptive control techniques in various applications. However, they also expose the fact that the convergence of the adaptive gains has remained uncertain. The gains may not converge to the ideal constant control gains predicted by the underlying linear time-invariant system considerations.
As those prior conditions that were also needed for stability may not hold, this conclusion may raise doubts about the robustness of the adaptive system. This paper intends to show that the adaptive control performs perfect tracking even when the linear time-invariant solution does not exist. It is shown that the adaptation performs a ‘steepest descent’ minimization of the errors, ultimately ending with the appropriate set of control gains that fit the particular input command and initial conditions. The adaptive gains do asymptotically reach an appropriate set of bounded constant ideal gain values that solve the problem at task. <s> BIB002
One can easily prove that the WASP conditions are sufficient to prove stability using just the simple adaptive output feedback gain (32). However, in order to avoid any misunderstandings related to the role of the unknown matrix W, here we chose to present a rigorous proof of stability for the general output model tracking case. As usual in adaptive control, one first assumes that the underlying fully deterministic output model tracking problem is solvable. A recent publication BIB002 shows that if the Model Reference uses a step input in order to generate the desired trajectory, the underlying tracking problem is always solvable. If, instead, the model input command is itself generated by an unknown system of order n_u, the model is required to be sufficiently large to accommodate this command BIB001 , or We assume that the plant to be controlled is minimum-phase and that the CB product is Positive Definite and diagonalizable, though not necessarily symmetric. As we showed, the plant is WASP according to Definition 2, so it satisfies conditions (22)-(23). Under these assumptions one can use the Lyapunov function (24). Differentiating (24) and using the W-passivity relations finally leads to the following derivative of the Lyapunov function (Appendix A) One can see that V̇(t) in (40) is negative definite with respect to e_x(t), yet only negative semidefinite with respect to the entire state-space {e_x(t), K(t)}. A direct result of Lyapunov stability theory is that all dynamic values are bounded. According to LaSalle's Invariance Principle , all state-variables and adaptive gains are bounded and the system ultimately ends within the domain defined by V̇(t) ≡ 0. Because V̇(t) is negative definite in e_x(t), the system thus ends with e_x(t) ≡ 0, which in turn implies e_y(t) ≡ 0. In other words, the adaptive control system demonstrates asymptotic convergence of the state and output error and boundedness of the adaptive gains.
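Since equations (24) and (40) themselves are not reproduced here, it may help to recall the schematic form such arguments take in the classical symmetric-CB case (the WASP version additionally carries the weighting matrix W and differs in detail):

```latex
V(t) = e_x^T(t)\,P\,e_x(t)
     + \operatorname{tr}\!\left[\bigl(K(t)-\tilde{K}\bigr)\Gamma^{-1}\bigl(K(t)-\tilde{K}\bigr)^T\right],
\qquad
\dot{V}(t) \le -\,e_x^T(t)\,Q\,e_x(t) \le 0,
```

where K̃ is a constant "ideal" gain whose existence follows from the solvability of the underlying tracking problem. The derivative is negative definite in e_x yet only semidefinite in the pair {e_x, K}, which is exactly the point where LaSalle's Invariance Principle enters the proof.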
SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> On gain convergence, basic conditions for stability, optimality, robustness, etc. <s> Model reference adaptive control procedures that do not require explicit parameter identification are considered for large structural systems. Although such applications have been shown to be feasible for multivariable systems, provided there exists a feedback gain matrix which makes the resulting input/output transfer function strictly positive real, it is now shown that this constraint is overly restrictive and that only positive realness is required. Subsequent consideration of a simply supported beam shows that if actuators and sensors are collocated, then the positive realness constraint will be satisfied and the model reference adaptive control will then indeed be suitable for velocity following when only velocity sensors are available and for both position and velocity following when velocity plus scaled position outputs are measured. In both cases, all states, regardless of system dimension, are guaranteed to be stable. The need for parameter estimation and/or adaptive control of any system arises because of ignorance of the system's internal structure and critical parameter values, as well as changing control regimes. A large structural system (LSS) is substantially more susceptible to these problems. The most crucial problem of adaptive control of large structures is that the plant is very large or infinite-dimensional and, consequently, the adaptive controller must be based on a low-order model of the system in order to be implemented with an on-line/onboard computer. However, any controller based on a reduced-order model (ROM) must operate in closed loop with the actual system; thus it interacts not only with the ROM but also with the residual subsystem (through the spillover and model error terms).
One particular adaptive algorithm that seems applicable to LSS is the direct (or implicit) model reference-based approach taken by Sobel et al. In particular, using command generator tracker (CGT) theory, with Lyapunov stability-based design procedures, they were able to develop for step commands a model reference adaptive control (MRAC) algorithm that, without the need for parameter identification, forced the error between plant and model (which need not be of the same order as the plant) to approach zero, provided that certain plant/model structural conditions are satisfied. Such an adaptation algorithm is very attractive for the control of large structural systems since it eliminates the need for explicitly identifying the large number of modes that must be modeled, and, furthermore, eliminates the spillover effects. Relative to the conditions that must be satisfied, it was shown that asymptotic stability results provided that the plant input/output transfer matrix is strictly positive real for some feedback gain matrix and provided that there exists a bounded solution to the corresponding deterministic CGT problem. Such a solution, however, does not always exist for structural problems with velocity sensors and, furthermore, the transfer matrix for structural systems is positive real (not strictly positive real) for collocated actuators and rate sensors. <s> BIB001 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> On gain convergence, basic conditions for stability, optimality, robustness, etc. <s> This paper addresses the problem of designing model-reference adaptive control for linear MIMO systems with unknown high-frequency gain matrix (HFGM). The concept of hierarchy of control is introduced leading to a new control parametrization and an error equation with triangular HFGM, which allows a sequential design of the adaptation scheme.
Significant reduction of the prior knowledge about the HFGM is achieved, overcoming the limitations of the known methods. A complete stability and convergence analysis is developed based on a new class of signals and their properties. Exponential stability is guaranteed under explicit persistency of excitation conditions. <s> BIB002 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> On gain convergence, basic conditions for stability, optimality, robustness, etc. <s> Recent publications have presented successful implementations of simple direct adaptive control techniques in various applications. However, they also expose the fact that the convergence of the adaptive gains has remained uncertain. The gains may not converge to the ideal constant control gains predicted by the underlying linear time-invariant system considerations. As those prior conditions that were also needed for stability may not hold, this conclusion may raise doubts about the robustness of the adaptive system. This paper intends to show that the adaptive control performs perfect tracking even when the linear time-invariant solution does not exist. It is shown that the adaptation performs a ‘steepest descent’ minimization of the errors, ultimately ending with the appropriate set of control gains that fit the particular input command and initial conditions. The adaptive gains do asymptotically reach an appropriate set of bounded constant ideal gain values that solve the problem at task. Copyright © 2004 John Wiley & Sons, Ltd. <s> BIB003 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> On gain convergence, basic conditions for stability, optimality, robustness, etc. <s> Abstract Recent publications have shown that under some conditions continuous linear time-invariant systems become strictly positive real with constant feedback. 
This paper expands the applicability of this result to discrete linear systems. The paper shows the sufficient conditions that allow a discrete system to become stable and strictly passive via static (constant or nonstationary) output feedback. <s> BIB004 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> On gain convergence, basic conditions for stability, optimality, robustness, etc. <s> Recent publications have shown that under some conditions linear time-invariant systems become strictly positive real with constant feedback. To expand the applicability of this result to nonstationary and nonlinear systems, this paper first reviews a few pole-zero dynamics definitions in nonstationary systems and relates them to stability and passivity of the systems. The paper then shows the sufficient conditions that allow a system to become stable and strictly passive via static (constant or nonstationary) output feedback. Applications in robotics and adaptive control are also presented. <s> BIB005 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> On gain convergence, basic conditions for stability, optimality, robustness, etc. <s> In this paper, a Nonlinear Direct Model Reference Adaptive Control (NDMRAC) i s derived. The NDMRAC controller is compared to the Full State Feedback (FSFB) controller. Both of the controllers are applied to a rigid body spacecraft. To compare the controllers, the inertia matrix is suddenly changed in the simulation. Euler equations are used to estimate the evolution of the rigid body angular velocity and quaternions are used to describe the attitude position of the rigid body. 
The system is augmented or modified to account for the disturbances affecting the system under observation, so the NDMRAC control also implements a Direct Adaptive Disturbance Rejection (DADR) control which partially or fully eliminates the disturbance coming into the simulated system. The error of the system and the power spectrum density of the disturbance ar e used to analyze the performance of t he NDMRAC and DADR controllers. <s> BIB006
Some particularly interesting questions may arise during the proof of stability. First, although the Lyapunov function was carefully selected to contain both the state error and the adaptive gains, its derivative only contains the state error. It appears as if the successful proof of stability has "managed" to eliminate any possibly negative effect of the adaptive gains. One is then entitled to ask what positive role the adaptive gains play (besides not having negative effects). This is just one more illustration of the difficulties related to the analysis of nonlinear systems. Indeed, although Lyapunov stability theory manages to prove stability, it cannot and does not provide all answers. Besides, as potential counterexamples seem to show, although the tracking error and the derivative of the adaptive gains tend to vanish, this mere result does not necessarily imply, as one might have initially thought, that the adaptive gains reach a constant value, or even a limit at all. If the adaptive gain happens to be a function such as k(t) = sin(ln t) (suggested to us by Mark Balas), its derivative is k̇(t) = cos(ln t)/t. In this example one can see that although the derivative tends to vanish in time, the gain k(t) itself does not reach any limit at all. Therefore, the common opinion that seems to be accepted among experts is that the adaptive gains do not seem to converge unless the presence of some "sufficient" excitation can be guaranteed. This seems to imply that even in the most ideal, perfect-following situations, the adaptive control gains may continue wandering forever. However, recent results have shown that these open questions and problems are only apparent. First, even if it is not a direct result of Lyapunov analysis, one can show that the adaptive control gains always perform a steepest descent minimization of the tracking error BIB003 .
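Balas's counterexample is easy to check numerically. The following sketch (illustrative sampling only) confirms that the derivative of k(t) = sin(ln t) vanishes as t grows while k(t) itself keeps sweeping its full range and never settles:

```python
import numpy as np

# Balas's counterexample: k(t) = sin(ln t).
# Its derivative k'(t) = cos(ln t)/t vanishes as t grows,
# yet k(t) keeps oscillating between -1 and 1 forever.
def k(t):
    return np.sin(np.log(t))

def k_dot(t):
    return np.cos(np.log(t)) / t

# Sample over a very long horizon; log-spacing makes ln(t) uniform.
t = np.logspace(1, 12, 200)   # t from 10 to 1e12

# The derivative shrinks toward zero ...
assert np.max(np.abs(k_dot(t[-50:]))) < 1e-9
# ... but the gain itself still sweeps essentially its full range.
assert k(t).max() > 0.99 and k(t).min() < -0.99
```

This is exactly the behavior described above: a vanishing gain derivative alone does not imply gain convergence.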
Although this "minimum" could still increase without bound in general, if the stability of the system were not guaranteed, this is not the case with SAC. Second, with respect to the final gain values: when one tests an adaptive controller with a given plant, one first assumes that an underlying LTI solution for the ideal control gains exists, and the adaptive controller is then supposed to find those gains at the end of the adaptation. If the plant is known, one can first solve the deterministic tracking problem and find the ideal control gains. Then, the designer proceeds with the implementation of the adaptive controller and expects it to converge to the pre-computed ideal solution. In practice, however, one observes that, even though the tracking errors do vanish, the adaptive gains do not seem to converge. Besides very few exceptions, such results don't seem to be widely used in Adaptive Control or in nonlinear control systems in general; this fact could be partially explained by the very general character of the results. However, their proper interpretation and application towards the development of new basic analysis tools, such as combining a Modified Invariance Principle with the Gronwall-Bellman Lemma BIB001 , BIB003 , finally managed to provide the solution to this problem. It was shown that if the adaptive control gains do not reach the "unique" solution that the preliminary LTI design seemed to suggest, it is not because something was wrong with the adaptive controller, but rather because the adaptive control can go beyond the LTI design. The existence of a "general" LTI solution is useful in facilitating and shortening the proof of stability, yet it is not needed for the convergence of the adaptive controller.
While the sought-after stationary controller must provide a fixed set of constant gains that would fit any input command, the adaptive controller only needs the specific set of control gains that corresponds to the particular input command. Even in those cases when the general LTI solution does not exist, the particular solution that the adaptive controller needs does exist BIB003 . However, this complicates the stability analysis, because it was shown that those particular solutions may allow perfect following only after a transient that adds supplementary terms to the differential equations of motion. As a consequence, the stability analysis may end with the derivative of the Lyapunov function given by (41). Although the derivative (41) still contains the negative definite term with respect to the error state, it also contains a transient term that is not negative, so the derivative is not necessarily negative definite or even semidefinite. Apparently, (41) cannot be used for any decision on stability. However, although the decision on stability is not immediate, the Modified Invariance Principle reveals that all bounded solutions of the adaptive system asymptotically reach the domain where perfect tracking is possible. Therefore, one must find out what those "bounded trajectories" are, and it is the role of the Gronwall-Bellman Lemma to actually show that, under the WASP assumption, all trajectories are bounded. Therefore, the previous conclusions on asymptotically perfect tracking remain valid. Moreover, because the gains also reach that domain in space where perfect tracking is possible, this approach has also finally provided the answer to the (previously open) question of adaptive gain convergence.
Even if one assumes that the final asymptotically perfect tracking may occur while the adaptive gains continue to wander, one can show that the assumably nonstationary gains satisfy a linear differential equation with constant coefficients, so their solution is a summation of generalized exponential functions ( BIB003 and Appendix B). This partial conclusion immediately shows that such nonlinear "counterexample" gains as the one presented above may be nice and tough mathematical challenges, yet they cannot be solutions of, and are thus actually immaterial for, the SAC tracking problem. Furthermore, because the gains are bounded, they can only be combinations of constants and converging exponentials, so they must ultimately reach constant values. Therefore, we were finally able to show (at least within the scope of SAC) that the adaptive control gains do ultimately reach a set of stabilizing constant values at the end of a steepest descent minimization of the tracking error ( BIB003 and Appendix B). A recent paper tests SAC with a few counterexamples for the standard MRAC BIB002 . The paper shows that SAC not only maintains stability in all cases that led to instability with standard MRAC, but also demonstrates very good performance. Many practitioners who have tried it have been impressed with the ease of implementation of SAC and with its performance, even in large and complex applications. Many examples seem to show that SAC maintains its stable operation even in cases when the established sufficient conditions do not hold. Indeed, conditions for stability of SAC have been continuously mitigated over the years, as the two successive definitions of almost passivity conditions presented in this paper may show.
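The argument that a bounded sum of generalized exponentials must settle at a constant can be illustrated on a toy gain equation. The coefficients below are hypothetical and chosen only for demonstration, not taken from the paper's actual gain dynamics:

```python
# Toy instance of a gain obeying a linear constant-coefficient ODE:
#   k'' + 3k' + 2k = 2,  k(0) = 0,  k'(0) = 1.
# Its exact solution k(t) = 1 - exp(-t) is a sum of generalized
# exponentials; being bounded, it must settle at a constant (here, 1).
dt, T = 0.001, 20.0
k, kd = 0.0, 1.0
for _ in range(int(T / dt)):
    kdd = 2.0 - 3.0 * kd - 2.0 * k   # k'' from the ODE
    kd += dt * kdd                   # semi-implicit Euler step
    k += dt * kd
assert abs(k - 1.0) < 1e-3           # the bounded gain converges
```

A gain such as sin(ln t), by contrast, satisfies no such constant-coefficient equation, which is why it is immaterial for the SAC tracking problem.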
In order to get another qualitative estimate of SAC robustness, assume that instead of (1)-(2) the actual plant contains an additional term f(x). Assume that the nominal {A, B, C} system is WASP, while f(x) is some (linear or nonlinear) component that prevents the satisfaction of the passivity conditions. If one uses the same Lyapunov function (24), instead of (40) one gets a modified derivative for the stabilization problem and, for the tracking problem, the derivative (46), where x* is the ideal trajectory, as defined in Appendix A. Note that the derivative of the Lyapunov function remains negative definite in terms of x(t) or e_x(t), correspondingly, if the second term in the sum is not too large, as defined (for example) by the inequality BIB006 . While until very recently the main effort had been dedicated to the clarification and relaxation of the passivity conditions, similar effort is dedicated now to clarifying the limits of robustness of SAC when the basic passivity conditions are not entirely satisfied. Besides, although much effort has been dedicated to the clarification of passivity concepts in the context of Adaptive Control of stationary continuous-time systems, similar effort has been dedicated to extending these concepts to discrete-time BIB004 and nonstationary and nonlinear systems BIB005 , .
SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> PARALLEL FEEDFORWARD AND STABILITY OF SIMPLE ADAPTIVE CONTROL <s> ABSTRACT Simple adaptive control systems were recently shown to be globally stable and to maintain robustness with disturbances if the controlled system is “almost strictly positive real” namely, if there exists a constant output feedback (unknown and not needed for implementation) such that the resulting closed loop transfer function is strictly positive real. In this paper it is shown how to use parallel feedforward and the stabi 1izability properties of systems in order to satisfy the “almost positivity” condition. The feedforward configuration may be constant, if some prior knowledge is given, or adaptive, in general. This way, simple adaptive controllers can be implemented in a large number of complex control systems, without requiring the order of the plant or the pole-excess as prior knowledge. <s> BIB001 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> PARALLEL FEEDFORWARD AND STABILITY OF SIMPLE ADAPTIVE CONTROL <s> This paper deals with two problems for the improvement of the control performance of simple adaptive control (SAC) techniques. First, it is discussed that the introduction of a robust adaptive control term much robustifies the SAC system concerning plant uncertainties such as state dependent disturbance. Second, a practical procedure is described for designing the parallel feedforward compensator, which is necessary for the actual realization of the SAC system, given prior information concerning the plant such that: (1) the plant is minimum phase; (2) an upper bound on the relative degree exists; and (3) approximate values of high and low frequency gains are known. The effectiveness of the proposed methods is confirmed through the simulation of typical examples of adaptive control systems. 
<s> BIB002 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> PARALLEL FEEDFORWARD AND STABILITY OF SIMPLE ADAPTIVE CONTROL <s> This paper presents theory for stability analysis and design for a class of observer-based feedback control systems. Relaxation of the controllability and observability conditions imposed in the Yakubovich-Kalman-Popov (YKP) lemma can be made for a class of nonlinear systems described by a linear time-invariant system (LTI) with a feedback-connected cone-bounded nonlinear element. It is shown how a circle-criterion approach can be used to design an observer-based state feedback control which yields a closed-loop system with specified robustness characteristics. The approach is relevant for design with preservation of stability when a cone-bounded nonlinearity is introduced in the feedback loop. Important applications are to be found in nonlinear control with high robustness requirements. <s> BIB003 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> PARALLEL FEEDFORWARD AND STABILITY OF SIMPLE ADAPTIVE CONTROL <s> Ar ecent publication uses a difficult design example to show that fuzzy logic might have advantages when compared with classical compensators. Although in this particular case the application was shown to be successful, convergence of the fuzzy-logic algorithm, as well as other time-varying controllers, cannot not be guaranteed unless some preliminary conditions are satisfied. It will be shown that further exploitation of the classical design can improve robust performance. This result is then used to create sufficient conditions that guarantee convergence with time-varying controllers, and it is then shown that simple adaptive control methods can further improve performance and maintain it in changing environments. I. 
Introduction A RECENT publication 1 has presented successful applications of fuzzy-logic control design in a nonminimum phase autopilot with uncertainty of parameters. The authors use this difficult design case to show that fuzzy logic has advantages when compared with a classical compensator or with the ubiquitous proportional‐ integral‐derivative (PID) design when uncertainty is concerned. Although this particular fuzzy-logic application was successful, it is well known that convergence with nonstationary controllers, including adaptive and fuzzy-logic algorithms, is not inherently guaranteed. This paper intends to show that further exploitation of the basic knowledge of the plant and the uncertainty can be used to improve the performance of a classical control design and also to create sufficient conditions that guarantee convergence of time-varying controllers. The results are presented here in connection with simple adaptive control that is shown to achieve improved performance along with the guarantee of stability. Successful implementations of simple direct adaptive control techniques in various domains of application have been presented over the past two decades in the technical literature. This simpleadaptive-control (SAC) methodology has been introduced by Sobel et al. 2 and further developed by Barkana et al. 3 and Barkana and Kaufman. 4,5 These techniques have also been extended by Wenn and Balas 6 and Balas 7 to infinite-dimensional systems. Those successful applications of low-order adaptive controllers to large-scale examples have led to successful implementations of SAC in such diverse applications as flexible structures, 8−15 flight control, 16,17 <s> BIB004
Using for illustration the example of Section VIII, assume that K_MAX = 2.5 is an estimate of the highest admissible constant gain that maintains stability of the system. One would never use this value, because it would not be a good control gain value. Indeed, we only use the mere knowledge that a (fictitious) closed-loop system using the high gain value of 2.5 would still be stable. Instead of implementing constant output feedback, we use this knowledge in order to augment the system with a simple Parallel Feedforward Configuration (PFC) across the plant. If the original plant has the transfer function G(s), the closed-loop system with the gain K_MAX would be asymptotically stable. The augmented system uses the inverse of the stabilizing gain in parallel with the plant, and if the closed-loop system is stable, one can see that the augmented system is minimum-phase (Figure 8). Note that although we would never suggest using direct input-output gains in parallel with the plant, this is a simple and useful illustration that may facilitate the understanding of the basic idea. Also, although in this paper we only dealt (and will continue to deal) with strictly causal systems, for this specific case it is useful to recall that a minimum-phase plant with relative degree 0 (zero) is also ASPR. As (53) shows, one could use the inverse of any stabilizing gain in order to get ASPR configurations. However, any such addition is added ballast to the original plant output, so using the inverse of the maximal allowed gain adds the minimal possible alteration to the plant output. The augmented system has three poles and three zeros, and all zeros are minimum-phase. Such a system cannot become unstable, no matter how large the constant gain k becomes; moreover, because it is ASPR, one can also show that it stays stable no matter how large the nonstationary adaptive gain k(t) becomes.
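The key fact — that the zeros of the augmented plant G(s) + 1/K_MAX are exactly the poles of the fictitious closed loop with gain K_MAX — can be checked numerically. Since the Section VIII plant is not reproduced in this text, the sketch below assumes, for illustration only, G(s) = 1/(s(s+1)(s+2)), which constant output feedback stabilizes for any gain 0 < k < 6:

```python
import numpy as np

# Hypothetical plant G(s) = 1/(s(s+1)(s+2)) (not the paper's actual plant).
den = np.polymul([1, 1, 0], [1, 2])   # s(s+1)(s+2) = s^3 + 3s^2 + 2s
num = np.array([1.0])
K_MAX = 2.5                           # estimated max admissible gain

# Poles of the fictitious closed loop with gain K_MAX: roots of den + K_MAX*num.
cl_poles = np.roots(np.polyadd(den, K_MAX * num))
assert all(p.real < 0 for p in cl_poles)      # that loop is indeed stable

# Parallel feedforward D = 1/K_MAX across the plant:
# G_a(s) = G(s) + 1/K_MAX = (den/K_MAX + num) / den
zeros_aug = np.roots(np.polyadd(den / K_MAX, num))

# The zeros of the augmented plant coincide with the closed-loop poles
# above, so stability at K_MAX makes the augmented plant minimum-phase.
assert np.allclose(np.sort_complex(zeros_aug), np.sort_complex(cl_poles))
assert all(z.real < 0 for z in zeros_aug)
```

The augmented system here has three poles and three left-half-plane zeros, matching the relative-degree-0 configuration discussed in the text.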
One can easily see that the parallel feedforward keeps the effective control gain that affects the plant always below the maximal admissible constant gain (Figure 9). While this qualitative demonstration intends to provide some intuition for the designer who is used to estimating stability in terms of gain and phase margins, rigorous proofs of stability using the Lyapunov-LaSalle techniques and almost passivity conditions are also available and provide the necessary rigorous proof of stability. As we already mentioned above, the constant parallel feedforward has only been presented here as a first intuitive illustration. In practice, however, one does not want to use a direct input-output gain across the plant, because that would require solving implicit loops that include the adaptive gain computations. Therefore, we go to the next step, which takes us to the ubiquitous PD controllers. In practice, many control systems use some form of PD controller, along with other additions that may be needed to improve performance. While the additions are needed to achieve the desired performance, in many cases the PD controller alone is sufficient to stabilize the plant. In our case, a PD controller H(s) makes the Root-locus plot look like Figure 10. The system is asymptotically stable for any fixed gain within the "admissible" range 0-2.66, so we again choose K_MAX = 2.5 as an estimate of the highest admissible constant gain that maintains stability of the system. This time, however, we use D(s) = 1/H(s), the inverse of the PD controller, as the parallel feedforward across the plant. The Root-locus of the resulting augmented plant is shown in Figure 11. This is a strictly causal system with 4 poles and 3 strictly minimum-phase zeros and is therefore ASPR.
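For the constant-feedforward case, closing the loop through the augmented output y + u/K_MAX means a feedback gain k acts on the plant with the effective gain k/(1 + k/K_MAX), which stays below K_MAX for any k. A minimal check of this bound:

```python
import numpy as np

# With the feedforward D = 1/K_MAX in the loop, a (possibly huge)
# feedback gain k acting on the augmented output y + u/K_MAX gives an
# effective gain on the real plant of k_eff = k / (1 + k/K_MAX).
K_MAX = 2.5

def k_eff(k):
    return k / (1.0 + k / K_MAX)

k = np.logspace(-2, 6, 100)            # gains from 0.01 up to 1e6
assert np.all(k_eff(k) < K_MAX)        # never exceeds the admissible gain
assert abs(k_eff(1e6) - K_MAX) < 1e-4  # saturates at K_MAX from below
```

This is why even an unbounded adaptive gain k(t) cannot push the real loop beyond the admissible gain range.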
Although the original plant was non-minimum-phase, a fact that would usually forbid using adaptive controllers, here one can apply SAC and be sure that stability and asymptotically perfect tracking of the augmented system are guaranteed. The only open question is how well the actual plant output performs. In this respect, the maximal admissible gain with the fictitious PD (or with any other fictitious controller) defines how small the added ballast is and how close the actual output is to the augmented output. The example here is a very bad system; it was only used to illustrate the problems one may encounter using constant gains in changing environments and cannot be expected to result in good behavior without much more study and basic control design. The examples above have been used to present a simple principle: if the system can be stabilized by the controller H(s), then the augmented system G_a(s) = G(s) + H^{-1}(s) is minimum-phase. Proper selection of the relative degree of H^{-1}(s) will thus render the augmented system ASPR BIB001 . This last statement implies that "passivability" of systems is actually dual to stabilizability. If a stabilizing controller is known, its inverse in parallel with the plant can make the augmented system ASPR. When sufficient prior knowledge is available to design a stabilizing controller, some researchers prefer to use this knowledge and directly design the corresponding parallel feedforward BIB002 or "shunt" . When the "plant" is a differential equation, it is easy to assume that the order or the relative degree is available, and then a stabilizing controller or the parallel feedforward can be implemented.
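The principle G_a(s) = G(s) + H^{-1}(s) can also be sketched with a dynamic compensator. The plant and the PD gains below are hypothetical (the paper's Section VIII numbers are not reproduced here); the point is only the pole/zero bookkeeping: with G = n/d and H = h, the augmented plant is (n·h + d)/(d·h), whose zeros are the closed-loop poles of G under H:

```python
import numpy as np

# Hypothetical plant G(s) = 1/(s(s+1)(s+2)) and fictitious PD H(s) = 2s + 1.
d = np.polymul([1, 1, 0], [1, 2])    # denominator s^3 + 3s^2 + 2s
n = np.array([1.0])                  # numerator
h = np.array([2.0, 1.0])             # H(s) = 2s + 1

# Augment with D(s) = 1/H(s):  G_a = G + 1/H = (n*h + d) / (d*h)
num_aug = np.polyadd(np.polymul(n, h), d)
den_aug = np.polymul(d, h)

# 4 poles, 3 zeros -> strictly causal with relative degree 1, and the
# zeros are the closed-loop poles of G under the PD controller H.
assert len(den_aug) - 1 == 4 and len(num_aug) - 1 == 3
assert all(z.real < 0 for z in np.roots(num_aug))   # minimum-phase (ASPR structure)
```

As in the text, a stabilizing H makes n·h + d Hurwitz, so the augmented plant is minimum-phase with relative degree 1.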
However, in the real world, where the "plant" could be a plane, a flexible structure, or a ship, the available knowledge is the result of wind-tunnel or other experimental tests that may yield an approximate frequency response or an approximate model, sufficient to allow some control design, yet in general not providing reliable knowledge of the order or relative degree of the real plant. On the other hand, the control community actually continues to control real-world systems with fixed controllers (although it may very much want some adaptive control to help improve performance, if it could only be trusted). Therefore, in our opinion, the question "How can you find a stabilizing controller?" should not be given any excessive emphasis. In any case, if there is sufficient prior knowledge to directly design the feedforward, there is definitely sufficient information to design a stabilizing configuration, and vice versa. Note that the example of this section is a bad system that was selected on purpose to provide a counterexample for stability with assumably "constant" gains. Although the stability of the augmented system with adaptive control is guaranteed, the plant output may not behave very well, even with the added parallel feedforward. In any case, even in those cases when the parallel feedforward is too large to allow good performance as monitored at the actual plant output, the behavior of the (possibly both unstable and non-minimum-phase) plant within the augmented system is stable; it was shown to allow stable identification schemes BIB003 and thus lead to a better understanding of the plant towards better, adaptive or non-adaptive, control design.
Still, as recently shown with a non-minimum-phase UAV example BIB004 and with many other realistic examples , the prior knowledge usually available for design allows using a basic preliminary design and then very small additions to the plant that not only result in robust stability of the adaptive control system, even with originally non-minimum-phase plants, but also lead to performance that is ultimately superior to other control methodologies. A recent publication uses the parallel feedforward compensator for safe tuning of MIMO adaptive PID controllers, and another shows how to implement Simple Adaptive Controllers with guaranteed H ∞ performance.
SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> ROBUSTNESS OF SIMPLE ADAPTIVE CONTROL WITH DISTURBANCES <s> 1. Introduction.- 2. Continuous-time identifiers and adaptive observers.- 3. Discrete-time identifiers.- 4. Robustness improvement of identifiers and adaptive observers.- 5. Adaptive control in the presence of disturbances.- 6. Reduced-order adaptive control.- 7. Decentralized adaptive control.- 8. Reduced order-decentralized adaptive control.- Corrections. <s> BIB001 </s> SIMPLE ADAPTIVE CONTROL - A STABLE DIRECT MODEL REFERENCE ADAPTIVE CONTROL METHODOLOGY - BRIEF SURVEY <s> ROBUSTNESS OF SIMPLE ADAPTIVE CONTROL WITH DISTURBANCES <s> Recent publications have presented successful implementations of simple direct adaptive control techniques in various applications. However, they also expose the fact that the convergence of the adaptive gains has remained uncertain. The gains may not converge to the ideal constant control gains predicted by the underlying linear time-invariant system considerations. As those prior conditions that were also needed for stability may not hold, this conclusion may raise doubts about the robustness of the adaptive system. This paper intends to show that the adaptive control performs perfect tracking even when the linear time-invariant solution does not exist. It is shown that the adaptation performs a ‘steepest descent’ minimization of the errors, ultimately ending with the appropriate set of control gains that fit the particular input command and initial conditions. The adaptive gains do asymptotically reach an appropriate set of bounded constant ideal gain values that solve the problem at task. Copyright © 2004 John Wiley & Sons, Ltd. <s> BIB002
The presentation so far has shown that a simple adaptive controller can guarantee the stability of any system that is minimum-phase, provided the CB product is positive definite (and diagonalizable if not symmetric). In case these conditions do not inherently hold, basic knowledge of the stabilizability properties of the plant, usually available, can be used to fulfill them via Parallel Feedforward Configurations. Therefore, the proposed methodology seems to fit almost any case where asymptotically perfect output tracking is possible. However, after we presented the eulogy of the adaptive output feedback gain (32), it is about time to also present what could become its demise, if not properly treated. When persistent disturbances such as random noise or very high-frequency vibrations are present, perfect tracking is not possible. Even when the disturbance is known and various variations of the Internal Model Principle can be devised to filter it out, some residual tracking error may always be present. While tracking with small final errors could be acceptable, it is clear that the adaptive gain term (32) would, slowly but certainly, increase without limit. Indeed, theoretically, ASPR systems maintain stability with arbitrarily high gains, and in some cases (missiles, for example) the adaptive system's mission could end before any problems are even observed. However, allowing the build-up of high gains that do not come in response to any actual requirement is not acceptable, because in practice they may lead to numerical problems and saturation effects.
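The CB condition stated above is straightforward to verify for a given realization. The matrices below are hypothetical, chosen only to illustrate the check (positive definiteness of the symmetric part, plus diagonalizability when CB is not symmetric):

```python
import numpy as np

# Illustrative 2-input/2-output matrices (hypothetical, for demonstration).
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.4, 0.2]])
C = np.array([[2.0, 0.5, 0.0],
              [0.0, 1.5, 1.0]])

CB = C @ B          # 2x2, not symmetric here
# Positive definite: the symmetric part has strictly positive eigenvalues.
assert np.all(np.linalg.eigvalsh(0.5 * (CB + CB.T)) > 0)
# Diagonalizable: a full set of linearly independent eigenvectors.
w, V = np.linalg.eig(CB)
assert np.linalg.matrix_rank(V) == CB.shape[0]
assert np.all(w.real > 0)
```

The minimum-phase requirement on the plant itself must of course be checked separately, e.g., from the transmission zeros of {A, B, C}.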
However, very early on we observed that the robustness of SAC with disturbances can be guaranteed by adding Ioannou's σ-term BIB001 to the adaptation of the error gain, as given in (55). Finally, this new addition literally makes SAC an adaptive controller (see BIB002 and and references therein): while the control gains always perform a steepest descent minimization of the tracking error, the error gain defined in (55) goes up and down, fitting the right gain to the right situation in accord with the changing operational needs.
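The effect of the σ-term can be shown on a scalar sketch. The signals and numbers below are illustrative (a persistent residual error of roughly constant power, not the paper's actual closed-loop signals): pure integral adaptation drifts without bound under the residual error, while the σ-modified gain stays bounded:

```python
import numpy as np

# Sigma-modification on a scalar error gain:
#   pure integration:  k_dot = gamma * e^2             (drifts under noise)
#   with sigma-term:   k_dot = gamma * e^2 - sigma*k   (stays bounded)
rng = np.random.default_rng(0)
dt, T, gamma, sigma = 0.01, 200.0, 1.0, 0.5

k_pure, k_sigma = 0.0, 0.0
for _ in range(int(T / dt)):
    e = rng.normal(0.0, 0.1)              # persistent residual error
    k_pure += dt * gamma * e**2
    k_sigma += dt * (gamma * e**2 - sigma * k_sigma)

# The integral-only gain keeps building up (~ gamma*E[e^2]*T = 2),
# while the sigma-modified gain settles near gamma*E[e^2]/sigma = 0.02.
assert k_pure > 5 * k_sigma
assert k_sigma < 0.05
```

This is the behavior described in the text: the σ-term lets the error gain rise when needed and decay when the demand disappears, instead of integrating the residual error forever.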
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> Hand and Mind: What Gestures Reveal about Thought. David McNeill. Chicago and London: University of Chicago Press, 1992. 416 pp. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> Preface Prologue: General introduction: Animal minds, human minds Kathleen Gibson A history of speculation on the relation between tools and language Gordon Hewes Part I. Word, Sign and Gesture: General introduction: Relations between visual-gestural and vocal-auditory modalities of communication Tim Ingold 1. Human gesture Adam Kendon 2. When does gesture become language? Susan Goldwin-Meadow 3. The emergence of language Sue Savage-Rumbaugh and Duane Rumbaugh 4. A comparative approach to language parallels Charles Snowdon Part II. Technological Skills and Associated Social Behaviors of the Non-Human Primates: Introduction: Generative interplay between technical capacities, social relations, imitation and cognition Kathleen Gibson 5. Capuchin monkeys Elisabetta Visalberghi 6. The intelligent use of tools William McGrew 7. Aspects of transmission of tool use in wild chimpanzees Christophe Boesch Part III. Connecting Up The Brain: Introduction: Overlapping neural control of language, gesture and tool use Kathleen Gibson 8. Disorders of language and tool use Daniel Kempler 9. Sex differences in visuospatial skills Dean Falk 10. The unitary hypothesis William H. Calvin 11. Tool use, language and social behaviour in relationship to information processing capacities Kathleen Gibson Part IV. Perspectives on Development: Introduction: Beyond neotony and recapitulation Kathleen Gibson 12. Human language development and object manipulation Andrew Lock 13. Comparative cognitive development Jonas Langer 14. Higher intelligence, propositional language and culture as adaptations for planning Sue Parker and Constance Milbrath Part V. 
Archaeological and Anthropological Perspectives: Introduction: Tools, techniques and technology Tim Ingold 15. Early stone industries and inferences regarding language and cognition Nicholas Toth and Kathy Schick 16. Tools and language in human evolution Iain Davidson and William Noble 17. Layers of thinking in tool behaviour Thomas Wynn 18. The complementation theory of language and tool use Peter Reynolds 19. Tool-use, sociality and intelligence Tim Ingold Epilogue: Technology, language, intelligence Tim Ingold Index. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. We survey the literature on visual interpretation of hand gestures in the context of its role in HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. Important differences in the gesture interpretation approaches arise depending on whether a 3D model of the human hand or an image appearance model of the human hand is used. 3D hand models offer a way of more elaborate modeling of hand gestures but lead to computational hurdles that have not been overcome given the real-time requirements of HCI. Appearance-based models lead to computationally efficient "purposive" approaches that work well under constrained situations but seem to lack the generality desirable for HCI. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. 
Although the current progress is encouraging, further theoretical as well as computational advances are needed before gestures can be widely used for HCI. We discuss directions of future research in gesture recognition, including its integration with other natural modes of human-computer interaction. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> This paper describes a dialogue system based on the recognition and synthesis of Japanese sign language. The purpose of this system is to support conversation between people with hearing impairments and hearing people. The system consists of five main modules: sign-language recognition and synthesis, voice recognition and synthesis, and dialogue control. The sign-language recognition module uses a stereo camera and a pair of colored gloves to track the movements of the signer, and sign-language synthesis is achieved by regenerating the motion data obtained by an optical motion capture system. An experiment was done to investigate changes in the gaze-line of hearing-impaired people when they read sign language, and the results are reported. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> A person stands in front of a large projection screen on which is shown a checked floor. They say, "Make a table," and a wooden table appears in the middle of the floor."On the table, place a vase," they gesture using a fist relative to palm of their other hand to show the relative location of the vase on the table. A vase appears at the correct location."Next to the table place a chair." A chair appears to the right of the table."Rotate it like this," while rotating their hand causes the chair to turn towards the table."View the scene from this direction," they say while pointing one hand towards the palm of the other. 
The scene rotates to match their hand orientation.In a matter of moments, a simple scene has been created using natural speech and gesture. The interface of the future? Not at all; Koons, Thorisson and Bolt demonstrated this work in 1992 [23]. Although research such as this has shown the value of combining speech and gesture at the interface, most computer graphics are still being developed with tools no more intuitive than a mouse and keyboard. This need not be the case. Current speech and gesture technologies make multimodal interfaces with combined voice and gesture input easily achievable. There are several commercial versions of continuous dictation software currently available, while tablets and pens are widely supported in graphics applications. However, having this capability doesn't mean that voice and gesture should be added to every modeling package in a haphazard manner. There are numerous issues that must be addressed in order to develop an intuitive interface that uses the strengths of both input modalities.In this article we describe motivations for adding voice and gesture to graphical applications, review previous work showing different ways these modalities may be used and outline some general interface guidelines. Finally, we give an overview of promising areas for future research. Our motivation for writing this is to spur developers to build compelling interfaces that will make speech and gesture as common on the desktop as the keyboard and mouse. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> We present a statistical approach to developing multimodal recognition systems and, in particular, to integrating the posterior probabilities of parallel input signals involved in the multimodal system. We first identify the primary factors that influence multimodal recognition performance by evaluating the multimodal recognition probabilities. 
We then develop two techniques, an estimate approach and a learning approach, which are designed to optimize accurate recognition during the multimodal integration process. We evaluate these methods using Quickset, a speech/gesture multimodal system, and report evaluation results based on an empirical corpus collected with Quickset. From an architectural perspective, the integration technique presented offers enhanced robustness. It also is premised on more realistic assumptions than previous multimodal systems using semantic fusion. From a methodological standpoint, the evaluation techniques that we describe provide a valuable tool for evaluating multimodal systems. <s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> The ability to recognize humans and their activities by vision is key for a machine to interact intelligently and effortlessly with a human-inhabited environment. Because of many potentially important applications, “looking at people” is currently one of the most active application domains in computer vision. This survey identifies a number of promising applications and provides an overview of recent developments in this domain. The scope of this survey is limited to work on whole-body or hand motion; it does not include work on human faces. The emphasis is on discussing the various methodologies; they are grouped in 2-D approaches with or without explicit shape models and 3-D approaches. Where appropriate, systems are reviewed. We conclude with some thoughts about future directions. <s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> Humans detect and interpret faces and facial expressions in a scene with little or no effort. Still, development of an automated system that accomplishes this task is rather difficult. 
There are several related problems: detection of an image segment as a face, extraction of the facial expression information, and classification of the expression (e.g., in emotion categories). A system that performs these operations accurately and in real time would form a big step in achieving a human-like interaction between man and machine. The paper surveys the past work in solving these problems. The capability of the human visual system with respect to these problems is discussed, too. It is meant to serve as an ultimate goal and a guide for determining recommendations for development of an automatic facial expression analyzer. <s> BIB008 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> The research topic of looking at people, that is, giving machines the ability to detect, track, and identify people and more generally, to interpret human behavior, has become a central topic in machine vision research. Initially thought to be the research problem that would be hardest to solve, it has proven remarkably tractable and has even spawned several thriving commercial enterprises. The principle driving application for this technology is "fourth generation" embedded computing: "smart" environments and portable or wearable devices. The key technical goals are to determine the computer's context with respect to nearby humans (e.g., who, what, when, where, and why) so that the computer can act or respond appropriately without detailed instructions. The paper examines the mathematical tools that have proven successful, provides a taxonomy of the problem domain, and then examines the state of the art. Four areas receive particular attention: person identification, surveillance/monitoring, 3D methods, and smart rooms/perceptual user interfaces. Finally, the paper discusses some of the research challenges and opportunities. 
<s> BIB009 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> An information kiosk with a JSL (Japanese sign language) recognition system that allows hearing-impaired people to easily search for various kinds of information and services was tested in a government office. This kiosk system was favorably received by most users. <s> BIB010 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> In this paper, we describe HandTalker: a system we designed for making friendly communication reality between deaf people and normal hearing society. The system consists of GTS (Gesture/Sign language To Spoken language) part and STG (Spoken language To Gesture/Sign language) part. GTS is based on the technology of sign language recognition, and STG is based on 3D virtual human synthesis. Integration of the sign language recognition and 3D virtual human techniques greatly improves the system performance. The computer interface for deaf people is data-glove, camera and computer display, and the interface for hearing-abled is microphone, keyboard, and display. HandTalker now can support no domain limited and continuously communication between deaf and hearing-abled Chinese people. <s> BIB011 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> Research on recognition and generation of signed languages and the gestural component of spoken languages has been held back by the unavailability of large-scale linguistically annotated corpora of the kind that led to significant advances in the area of spoken language. A major obstacle has been the lack of computational tools to assist in efficient analysis and transcription of visual language data. Here we describe SignStream, a computer program that we have designed to facilitate transcription and linguistic analysis of visual language. 
Machine vision methods to assist linguists in detailed annotation of gestures of the head, face, hands, and body are being developed. We have been using SignStream to analyze data from native signers of American Sign Language (ASL) collected in our new video collection facility, equipped with multiple synchronized digital video cameras. The video data and associated linguistic annotations are being made publicly available in multiple formats. <s> BIB012 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> Sign language is characterized by its interactivity and multimodality, which cause difficulties in data collection and annotation. To address these difficulties, we have developed a video-based Japanese sign language (JSL) corpus and a corpus tool for annotation and linguistic analysis. As the first step of linguistic annotation, we transcribed manual signs expressing lexical information as well as non-manual signs (NMSs) including head movements, facial actions, and posture that are used to express grammatical information. Our purpose is to extract grammatical rules from this corpus for the sign-language translation system under development. From this viewpoint, we will discuss methods for collecting elicited data, annotation required for grammatical analysis, as well as the corpus tool required for annotation and grammatical analysis. As the result of annotating 2800 utterances, we confirmed that there are at least 50 kinds of NMSs in JSL, using head (seven kinds), jaw (six kinds), mouth (18 kinds), cheeks (one kind), eyebrows (four kinds), eyes (seven kinds), eye gaze (two kinds), body posture (five kinds). We use this corpus for designing and testing an algorithm and grammatical rules for the sign-language translation system under development.
<s> BIB013 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> We have created software for automatic synthesis of signing animations from the HamNoSys transcription notation. In this process we have encountered certain shortcomings of the notation. We describe these, and consider how to develop a notation more suited to computer animation. <s> BIB014 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> INTRODUCTION <s> Inspired by the Defense Advanced Research Projects Agency's (DARPA) previous successes in speech recognition, we introduce a new task for sign language recognition research: a mobile one-way American sign language translator. We argue that such a device should be feasible in the next few years, may provide immediate practical benefits for the deaf community, and leads to a sustainable program of research comparable to early speech recognition efforts. We ground our efforts in a particular scenario, that of a deaf individual seeking an apartment, and discuss the system requirements and our interface for this scenario. Finally, we describe initial recognition results of 94% accuracy on a 141 sign vocabulary signed in phrases of four signs using a one-handed glove-based system and hidden Markov models (HMMs). <s> BIB015
In taxonomies of communicative hand/arm gestures, sign language (SL) is often regarded as the most structured of the various gesture categories. For example, different gesture categories have been considered as existing on a continuum, where gesticulation that accompanies verbal discourse is described as the least standardized and SL as the most constrained in terms of conventional forms that are allowed by the rules of syntax ( , BIB001 , Fig. 1a ). In Quek's taxonomy ( , Fig. 1b) , gestures are divided into acts and symbols, and SL is regarded as largely symbolic, and possibly also largely referential since modalizing gestures are defined as those occurring in conjunction with another communication mode, such as speech. In this view, SL appears to be a small subset of the possible forms of gestural communication. Indeed SL is highly structured and most SL gestures are of a symbolic nature (i.e., the meaning is not transparent from observing the form of the gestures), but these taxonomies obscure the richness and sophistication of the medium. SL communication involves not only hand/arm gestures (i.e., manual signing) but also nonmanual signals (NMS) conveyed through facial expressions, head movements, body postures and torso movements. Recognizing SL communication therefore requires simultaneous observation of these disparate body articulators and their precise synchronization, and information integration, perhaps utilizing a multimodal approach ( BIB005 , BIB006 ). As such, SL communication is highly complex and understanding it involves a substantial commonality with research in machine analysis and understanding of human action and behavior; for example, face and facial expression recognition , BIB008 , tracking and human motion analysis BIB007 , , and gesture recognition BIB003 .
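The multimodal integration of manual and nonmanual channels mentioned above ( BIB005 , BIB006 ) is often realized at the decision level by combining per-channel classifier posteriors. A minimal product-rule sketch follows; the channel names, class set, and probability values are hypothetical, and conditional independence between channels is assumed:

```python
def fuse_posteriors(channel_posteriors):
    """Product-rule (naive Bayes) fusion of per-channel class posteriors.

    channel_posteriors: list of lists over the same class set, one per
    channel (e.g., manual gestures, facial expression, head pose), each
    summing to 1. Assumes channels are conditionally independent.
    """
    fused = [1.0] * len(channel_posteriors[0])
    for p in channel_posteriors:
        fused = [f * q for f, q in zip(fused, p)]
    total = sum(fused)
    return [f / total for f in fused]  # renormalize to a proper distribution

# Hypothetical two-channel example over classes [statement, question]:
manual = [0.6, 0.4]  # hand gestures alone mildly favor "statement"
nms = [0.2, 0.8]     # raised eyebrows strongly suggest "question"
fused = fuse_posteriors([manual, nms])  # the NMS evidence dominates
```

More elaborate schemes weight each channel by its estimated reliability, in the spirit of the estimate and learning approaches of BIB006 .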
Detecting, tracking and identifying people, and interpreting human behavior are the capabilities required of pervasive computing and wearable devices in applications such as smart environments and perceptual user interfaces , BIB009 . These devices need to be context-aware, i.e., be able to determine their own context in relation to nearby objects and humans in order to respond appropriately without detailed instructions. Many of the problems and issues encountered in SL recognition are also encountered in the research areas mentioned above; the structured nature of SL makes it an ideal starting point for developing methods to solve these problems. Sign gestures are not all purely symbolic, and some are in fact mimetic or deictic (these are defined by Quek as act gestures where the movements performed relate directly to the intended interpretation). Mimetic gestures take the form of pantomimes and reflect some aspect of the object or activity that is being referred to. These are similar to classifier signs in American Sign Language (ASL) which can represent a particular object or person with the handshape and then act out the movements or actions of that object. Kendon BIB002 described one of the roles of hand gesticulations that accompany speech as providing images of the shapes of objects, spatial relations between objects or their paths of movement through space. These are in fact some of the same functions of classifier signs in ASL. A form of pantomime called constructed actions (role-playing or perspective shifting ) is also regularly used in SL discourse to relate stories about other people or places. Deictic or pointing gestures are extensively used in SL as pronouns or to specify an object or person who is present or to specify an absent person by pointing to a previously established referent location.
Hence, designing systems that can automatically recognize classifier signs, pointing gestures, and constructed actions in signing would be a step in the direction of analyzing gesticulation accompanying speech and other less structured gestures. SL gestures also offer a useful benchmark for evaluating hand/arm gesture recognition systems. Non-SL gesture recognition systems often deal with small, limited vocabularies which are defined to simplify the classification task. SL(s), on the other hand, are naturally developed languages as opposed to artificially defined ones and have large, well-defined vocabularies which include gestures that are difficult for recognition systems to disambiguate. One of the uses envisioned for SL recognition is in a sign-to-text/speech translation system. The complete translation system would additionally require machine translation from the recognized sequence of signs and NMS to the text or speech of a spoken language such as English. In an ideal system, the SL recognition module would have a large and general vocabulary, be able to capture and recognize manual information and NMS, perform accurately in real time and robustly in arbitrary environments, and allow for maximum user mobility. Such a translation system is not the only use for SL recognition systems however, and other useful applications where the system requirements and constraints may be quite different, include the following: . Translation or complete dialog systems for use in specific transactional domains such as government offices, post offices, cafeterias, etc. , BIB015 , BIB010 , BIB004 . These systems may also serve as a user interface to PCs or information servers . Such systems could be useful even with limited vocabulary and formulaic phrases, and a constrained data input environment (perhaps using direct-measure device gloves BIB011 , BIB010 or colored gloves and constrained background for visual input ). .
Bandwidth-conserving communication between signers through the use of avatars. Sign input data recognized at one end can be translated to a notational system (like HamNoSys) for transmission and synthesized into animation at the other end of the channel. This represents a great saving in bandwidth as compared to transmitting live video of a human signer. This concept is similar to a system for computer-generated signing developed under the Visicast project ( BIB014 ) where text content is translated to SiGML (Signing Gesture Markup Language, based on HamNoSys) to generate parameters for sign synthesis. Another possibility is creating SL documents for storage of recognized sign data in the form of sign notations, to be played back later through animation. . Automated or semiautomated annotation of video databases of native signing. Linguistic analyses of signed languages and gesticulations that accompany speech require large-scale linguistically annotated corpora. Manual transcription of such video data is time-consuming, and machine vision assisted annotation would greatly improve efficiency. Head tracking and handshape recognition algorithms BIB012 , and sign word boundary detection algorithms BIB013 have been applied for this purpose. . Input interface for augmentative communication systems. Assistive systems which are used for human-human communication by people with speech-impairments often require keyboard or joystick input from the user [14] . Gestural input involving some aspects of SL, like handshape for example, might be more user friendly. In the following, Section 2 gives a brief introduction to ASL, illustrating some aspects relevant to machine analysis. ASL is extensively used by the deaf communities of North America and is also one of the most well-researched among sign languages-by sign linguists as well as by researchers in machine recognition. In Section 3, we survey work related to automatic analysis of manual signing. 
Hand localization and tracking, and feature extraction in vision-based methods are considered in Sections 3.1 and 3.2, respectively. Classification schemes for sign gestures are considered in Section 3.3. These can be broadly divided into schemes that use a single classification stage or those that classify components of a gesture and then integrate them for sign classification. Section 3.3.1 considers classification methods employed to classify the whole sign or to classify its components. Section 3.3.2 considers methods that integrate component-level results for sign-level classification. Finally, Section 3.4 discusses the main issues involved in classification of sign gestures. Analysis of NMS is examined in Section 4. The issues are presented in Section 4.1 together with works on body pose and movement analysis, while works related to facial expression analysis, head pose, and motion analysis are examined in Appendix D (which can be found at www.computer.org/publications/dlib). The integration of these different cues is discussed in Section 4.2. Section 5 summarizes the state-of-the-art and future work, and Section 6 concludes the paper.
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> AMERICAN SIGN LANGUAGE-ISSUES RELEVANT TO AUTOMATIC RECOGNITION <s> Hand and Mind: What Gestures Reveal about Thought. David McNeill. Chicago and London: University of Chicago Press, 1992. 416 pp. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> AMERICAN SIGN LANGUAGE-ISSUES RELEVANT TO AUTOMATIC RECOGNITION <s> We are currently developing a vision-based sign language recognition system for mobile use. This requires operability in different environments with a large range of possible users, ideally under arbitrary conditions. In this paper, the problem of finding relevant information in single-view image sequences is tackled. We discuss some issues in low level image cues and present an approach for the fast detection of a signing person's hands. This is achieved by using a modified generic skin color model combined with pixel level motion information, which is obtained from motion history images. The approach is demonstrated with a watershed segmentation algorithm. <s> BIB002
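The hand-detection recipe in the abstract above, a skin-color model gated by motion history images, can be sketched as follows. The simple RGB skin rule, thresholds, and synthetic frames are illustrative assumptions, not the actual model of BIB002 :

```python
import numpy as np

def skin_mask(rgb):
    """Crude RGB skin rule (an illustrative stand-in for a trained skin-color model)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (r - np.minimum(g, b) > 15)

def update_mhi(mhi, moving, tau=10):
    """Motion history image: moving pixels reset to tau, others decay by 1."""
    return np.where(moving, tau, np.maximum(mhi - 1, 0))

def detect_hand(prev, curr, mhi, motion_thresh=30):
    """Skin pixels that also carry recent motion are hand candidates."""
    moving = np.abs(curr.astype(int) - prev.astype(int)).sum(axis=-1) > motion_thresh
    mhi = update_mhi(mhi, moving)
    return skin_mask(curr) & (mhi > 0), mhi

# Synthetic frames: a skin-colored patch appears against a black background
prev = np.zeros((8, 8, 3), dtype=np.uint8)
curr = prev.copy()
curr[2:4, 2:4] = [200, 120, 80]  # roughly skin-toned
hand, mhi = detect_hand(prev, curr, np.zeros((8, 8)))
```

Gating skin color by motion history suppresses static skin-colored regions (e.g., the face when it is not moving), which is why the combination is attractive for mobile, uncontrolled environments.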
[Fig. 1. (a) The gesture continuum BIB001 and (b) Quek's taxonomy.] Most research work in SL recognition has focused on classifying the lexical meaning of sign gestures. This is understandable since hand gestures do express the main information conveyed in signing. For example, from observing the hand gestures in the sequence of Fig. 2 , we can decipher the lexical meaning conveyed as "YOU STUDY." BIB002 However, without observing NMS and inflections in the signing, we cannot decipher the full meaning of the sentence as: "Are you studying very hard?" The query in the sentence is expressed by the body leaning forward, head thrust forward and raised eyebrows toward the end of the signed sequence (e.g., in Figs. 2e and 2f). To refer to an activity performed with great intensity, the lips are spread wide with the teeth visible and clenched; this co-occurs with the sign STUDY. In addition to information conveyed through these NMS, the hand gesture is performed repetitively in a circular contour with smooth motion. This continuous action further distinguishes the meaning as "studying" instead of "study." In the following sections, we will consider issues related to the lexical form of signs and point out some pertinent issues with respect to two important aspects of signing, viz., modifications to gestures that carry grammatical meaning, and NMS.
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Manual Signing Expressing Lexical Meaning <s> Publisher Summary This chapter focuses on the internal structure of syllables in ASL, the language of deaf communities in the United States and most of Canada. The argument for ASL syllable structure is based primarily on distributional evidence for the distinction between the syllable nucleus and onsets and codas. The chapter explains the distribution of two phenomena—secondary movements and handshape changes—in strings of segments of the form, PMP, MP, PM, M, and P, where P is position and M is movement. Their distribution provides evidence for analyzing these five sign types as syllables. Each syllable has a nucleus. Those in PMP and PM have a P as onset, while those in PMP and MP have a P as coda. The way Ms and Ps are organized into syllables can be accounted for by positing a sign language analogue of the sonority hierarchy in which Ms are more sonorous than Ps. Sonority peaks are then syllable nuclei. This also provides evidence that sign language phonology has the analogue of vowels and consonants: Ms correspond to vowels and Ps to consonants. This follows from their relative sonority—from the fact that they play analogous roles in the organization of the phonological string into syllables. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Manual Signing Expressing Lexical Meaning <s> We are currently developing a vision-based sign language recognition system for mobile use. This requires operability in different environments with a large range of possible users, ideally under arbitrary conditions. In this paper, the problem of finding relevant information in single-view image sequences is tackled. We discuss some issues in low level image cues and present an approach for the fast detection of a signing person's hands.
This is achieved by using a modified generic skin color model combined with pixel level motion information, which is obtained from motion history images. The approach is demonstrated with a watershed segmentation algorithm. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Manual Signing Expressing Lexical Meaning <s> This paper has the ambitious goal of outlining the phonological structures and processes we have analyzed in American Sign Language (ASL). In order to do this we have divided the paper into five parts. In section 1 we detail the types of sequential phenomena found in the production of individual signs, allowing us to argue that ASL signs are composed of sequences of phonological segments, just as are words in spoken languages. Section 2 provides the details of a segmental phonetic transcription system. Using the descriptions made available by the transcription system, Section 3 briefly discusses both paradigmatic and syntagmatic contrast in ASL signs. Section 4 deals with the various types of phonological processes at work in the language, processes remarkable in their similarity to phonological processes found in spoken languages. We conclude the paper with an overview of the major types of phonological effects of ASL's rich system of morphological processes. We realize that the majority of readers will come to this paper with neither sign language proficiency nor a knowledge of sign language structure. As a result, many will encounter reference to ASL signs without knowing their form. Although we have been unable to illustrate all the examples, we hope we have provided sufficient illustrations to make the paper more accessible. <s> BIB003
Sign linguists generally distinguish the basic components (or phoneme subunits) of a sign gesture as consisting of the handshape, hand orientation, location, and movement. Handshape refers to the finger configuration, orientation to the direction in which the palm and fingers are pointing, and location to where the hand is placed relative to the body. Hand movement traces out a trajectory in space. The first phonological model, proposed by Stokoe , emphasized the simultaneous organization of these subunits. In contrast, Liddell and Johnson's Movement-Hold model BIB003 emphasized sequential organization. Movement segments were defined as periods during which some part of the sign is in transition, whether handshape, hand location, or orientation. Hold segments are brief periods when all these parts are static. More recent models ( , BIB001 , , ) aim to represent both the simultaneous and sequential structure of signs and it would seem that the computational framework adopted for SL recognition must similarly be able to model both structures. There are a limited number of subunits which combine to make up all the possible signs, e.g., 30 handshapes, 8 hand orientations, 20 locations, and 40 movement trajectory shapes BIB003 (different numbers are proposed according to the phonological model adopted). Breaking down signs into their constituent parts has been used by various researchers for devising classification frameworks (Section 3.3.2). All parts are important as evidenced by the existence of minimal pairs: signs which differ in only one of the basic parts (Fig. 3a) . When signs occur in a continuous sequence to form sentences, the hand(s) need to move from the ending location of one sign to the starting location of the next. Simultaneously, the handshape and hand orientation also change from the ending handshape and orientation of one sign to the starting handshape and orientation of the next.
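The subunit decomposition above suggests a compact component-based representation for signs. The following toy sketch shows how minimal pairs can be detected as signs differing in exactly one component; the component labels and example entries are informal inventions, not a standard notation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sign:
    gloss: str
    handshape: str    # drawn from a small inventory (~30 in BIB003's count)
    orientation: str  # ~8 values
    location: str     # ~20 values
    movement: str     # ~40 trajectory shapes

COMPONENTS = ("handshape", "orientation", "location", "movement")

def is_minimal_pair(a, b):
    """Signs forming a minimal pair differ in exactly one component."""
    return sum(getattr(a, f) != getattr(b, f) for f in COMPONENTS) == 1

# Illustrative entries differing only in location:
s1 = Sign("APPLE", "bent-X", "palm-down", "cheek", "twist")
s2 = Sign("ONION", "bent-X", "palm-down", "near-eye", "twist")
```

Because all four components carry contrastive meaning, a recognizer that drops any one of them cannot, even in principle, separate such pairs; this motivates the component-level classifiers surveyed in Section 3.3.2.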
These intersign transition periods are called movement epenthesis BIB003 and are not part of either of the signs. Fig. 2b shows a frame within the movement epenthesis: the right hand is transiting from performing the sign YOU to the sign STUDY. In continuous signing, processes with effects similar to co-articulation in speech do also occur, where the appearance of a sign is affected by the preceding and succeeding signs (e.g., hold deletion, metathesis, and assimilation ). However, these processes do not necessarily occur in all signs; for example, hold deletion is variably applied depending on whether the hold involves contact with a body part BIB003 . (Words in capital letters are sign glosses, which represent signs with their closest meaning in English.) Hence, movement epenthesis occurs most frequently during continuous signing and should probably be tackled first by machine analysis, before dealing with the other phonological processes. Some aspects of signing impact the methods used for feature extraction and classification, especially for vision-based approaches. First, while performing a sign gesture, the hand may be required to be at different orientations with respect to the signer's body and, hence, a fixed hand orientation from a single viewpoint cannot be assumed. Second, different types of movements are involved in signing. Generally, movement refers to the whole hand tracing a global 3D trajectory, as in the sign STUDY of Fig. 2 where the hand moves in a circular trajectory. However, there are other signs which involve local movements only, such as changing the hand orientation by twisting the wrist (e.g., CHINESE and SOUR, Fig. 3b ) or moving the fingers only (e.g., COLOR). This imposes conflicting requirements on the field of view; it must be large enough to capture the global motion, but at the same time, small local movements must not be lost.
Third, both hands often touch or occlude each other when observed from a single viewpoint and, in some signs, the hands partially occlude the face, as in the signs CHINESE, SOUR, and COLOR. Hence, occlusion handling is an important consideration.
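One way to operationalize the Movement-Hold structure discussed in this section from a tracked hand trajectory is to threshold frame-to-frame speed. A rough sketch, with an illustrative 2D trajectory and threshold:

```python
import math

def movement_hold_labels(traj, speed_thresh=0.5):
    """Label each frame transition of a 2-D hand trajectory as
    'M' (movement) or 'H' (hold), following the Movement-Hold idea
    that holds are brief periods where the articulators are static."""
    labels = []
    for (x0, y0), (x1, y1) in zip(traj, traj[1:]):
        speed = math.hypot(x1 - x0, y1 - y0)
        labels.append("M" if speed > speed_thresh else "H")
    return labels

# Hypothetical trajectory: static, a diagonal movement, then static again
traj = [(0, 0), (0, 0), (1, 1), (2, 2), (2, 2), (2, 2)]
labels = movement_hold_labels(traj)  # ['H', 'M', 'M', 'H', 'H']
```

Runs of 'M' bounded by holds approximate Movement segments; in continuous signing, an M-run between the ending hold of one sign and the starting hold of the next is a candidate movement epenthesis interval. A real system would also need to track handshape and orientation change, not just location, since any of these in transition defines a Movement segment.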
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Grammatical Processes in Sign Gestures <s> There are expressions using spatial relationships in sign language that are called directional verbs. To understand a sign-language sentence that includes a directional verb, it is necessary to analyze the spatial relationship between the recognized sign-language words and to find the proper combination of a directional verb and the sign-language words related to it. In this paper, we propose an analysis method for evaluating the spatial relationship between a directional verb and other sign-language words according to the distribution of the parameters representing the spatial relationship. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Grammatical Processes in Sign Gestures <s> A method for the representation, recognition, and interpretation of parameterized gesture is presented. By parameterized gesture we mean gestures that exhibit a systematic spatial variation; one example is a point gesture where the relevant parameter is the two-dimensional direction. Our approach is to extend the standard hidden Markov model method of gesture recognition by including a global parametric variation in the output probabilities of the HMM states. Using a linear model of dependence, we formulate an expectation-maximization (EM) method for training the parametric HMM. During testing, a similar EM algorithm simultaneously maximizes the output likelihood of the PHMM for the given sequence and estimates the quantifying parameters. Using visually derived and directly measured three-dimensional hand position measurements as input, we present results that demonstrate the recognition superiority of the PHMM over standard HMM techniques, as well as greater robustness in parameter estimation with respect to noise in the input features.
Finally, we extend the PHMM to handle arbitrary smooth (nonlinear) dependencies. The nonlinear formulation requires the use of a generalized expectation-maximization (GEM) algorithm for both training and the simultaneous recognition of the gesture and estimation of the value of the parameter. We present results on a pointing gesture, where the nonlinear approach permits the natural spherical coordinate parameterization of pointing direction. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Grammatical Processes in Sign Gestures <s> Abstract In this paper, we present a new approach to recognizing hand signs. In this approach, motion recognition (the hand movement) is tightly coupled with spatial recognition (hand shape). The system uses multiclass, multidimensional discriminant analysis to automatically select the most discriminating linear features for gesture classification. A recursive partition tree approximator is proposed to do classification. This approach combined with our previous work on hand segmentation forms a new framework which addresses the three key aspects of hand sign interpretation: hand shape, location, and movement. The framework has been tested to recognize 28 different hand signs. The experimental results show that the system achieved a 93.2% recognition rate for test sequences that had not been used in the training phase. It is shown that our approach provides performance better than that of nearest neighbor classification in the eigensubspace. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Grammatical Processes in Sign Gestures <s> We present an algorithm for extracting and classifying two-dimensional motion in an image sequence based on motion trajectories. First, a multiscale segmentation is performed to generate homogeneous regions in each frame. Regions between consecutive frames are then matched to obtain two-view correspondences.
Affine transformations are computed from each pair of corresponding regions to define pixel matches. Pixel matches over consecutive image pairs are concatenated to obtain pixel-level motion trajectories across the image sequence. Motion patterns are learned from the extracted trajectories using a time-delay neural network. We apply the proposed method to recognize 40 hand gestures of American Sign Language. Experimental results show that motion patterns of hand gestures can be extracted and recognized accurately using motion trajectories. <s> BIB004 </s> Grammatical information conveyed through systematic temporal and spatial movement modifications is an integral aspect of sign language communication. We propose to model these systematic variations as simultaneous channels of information. Classification results at the channel level are output to Bayesian networks which recognize both the basic gesture meaning and the grammatical information (here referred to as layered meanings). With a simulated vocabulary of 6 basic signs and 5 possible layered meanings, test data for eight test subjects was recognized with 85.0% accuracy. We also adapt a system trained on three test subjects to recognize gesture data from a fourth person, based on a small set of adaptation data. We obtained gesture recognition accuracy of 88.5% which is a 75.7% reduction in error rate as compared to the unadapted system. <s> BIB005
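The linear parametric-HMM idea cited above (BIB002) is compact enough to sketch: every state's Gaussian emission mean is shifted linearly by a global gesture parameter, and, for a fixed state assignment, the best-fitting parameter has a closed-form least-squares solution. The sketch below is illustrative only; the array shapes and names are assumptions, not the paper's notation.

```python
import numpy as np

# Linear-dependence PHMM emission model: state j emits observations with mean
# mu_j(theta) = W[j] @ theta + mu0[j], so a global parameter theta (e.g. a
# pointing direction) shifts all state means systematically.
# W has shape (n_states, obs_dim, theta_dim); mu0 has shape (n_states, obs_dim).

def emission_logprob(x, theta, W_j, mu0_j, var):
    """Log N(x; W_j @ theta + mu0_j, var * I) for a single state."""
    d = x - (W_j @ theta + mu0_j)
    return -0.5 * (d @ d) / var - 0.5 * len(x) * np.log(2 * np.pi * var)

def estimate_theta(xs, states, W, mu0, var):
    """M-step for theta given a fixed state assignment: maximizing
    sum_t log N(x_t; W[s_t] theta + mu0[s_t], var I) is a least-squares
    problem with normal equations (sum W'W) theta = sum W'(x - mu0)."""
    p = W.shape[2]
    A, b = np.zeros((p, p)), np.zeros(p)
    for x, s in zip(xs, states):
        A += W[s].T @ W[s]
        b += W[s].T @ (x - mu0[s])
    return np.linalg.solve(A, b)
```

During recognition, the full PHMM alternates such an estimate with a forward-backward pass over soft state assignments, which is the EM iteration the abstract describes.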
The systematic changes to the sign appearance during continuous signing described above (addition of movement epenthesis, hold deletion, metathesis, assimilation) do not change the sign meaning. However, there are other systematic changes to one or more parts of the sign which affect the sign meaning, and these are briefly described in this section. In the sentence of Fig. 2, the sign STUDY is inflected for temporal aspect. Here, the handshape, orientation, and location of the sign are basically the same as in its lexical form but the movement of the sign is modified to show how the action (STUDY) is performed with reference to time. Examples of other signs that can be inflected in this way are WRITE, SIT, and SICK (Klima and Bellugi list 37 such signs). Fig. 4a shows examples of the sign ASK with different types of aspectual inflections. Generally, the meanings conveyed through these inflections are associated with aspects of the verbs that involve frequency, duration, recurrence, permanence, and intensity, and the sign's movement can be modified through its trajectory shape, rate, rhythm, and tension. Verbs can also be inflected for person agreement. Here, the verb indicates its subject and object by a change in the movement direction, with corresponding changes in its start and end locations, and hand orientation. Fig. 4b shows the sign ASK with different subject-object pairs. Other signs that can be similarly inflected include SHOW, GIVE, and INFORM (Padden lists 63 such verbs). These signs can also be inflected to show the number of persons in the subject and/or object, or show how the verb action is distributed with respect to the individuals participating in the action (10 different types of number agreement and distributional inflections have been identified, including dual, reciprocal, multiple, exhaustive, etc.). Verbs can be simultaneously inflected for person and number agreement.
Other examples of grammatical processes which result in systematic variations in sign appearance include emphatic inflections, derivation of nouns from verbs, numerical incorporation, and compound signs. Emphatic inflections are used for the purpose of emphasis and are expressed through repetition in the sign's movement, with tension throughout. Appendix A (which can be found at www.computer.org/publications/dlib) has more details with illustrative photos and videos and discusses some implications for machine understanding. Classifier signs, which can be constructed with innumerable variations, are also discussed. Generally, there have been very few works that address inflectional and derivational processes that affect the spatial and temporal dimensions of sign appearance in systematic ways (as described in Section 2.2 and Appendix A at www.computer.org/publications/dlib). HMMs, which have been applied successfully to lexical sign recognition, are designed to tolerate variability in the timing of observation features; however, such timing variations are precisely the essence of temporal aspect inflections, so the information they carry is discarded. The approach of mapping each isolated gesture sequence into a standard temporal length (BIB003, BIB004) likewise causes loss of information on the movement dynamics. The few works that address grammatical processes in SL generally deal only with spatial variations. Sagawa and Takeuchi BIB001 deciphered the subject-object pairs of JSL verbs in sentences by learning the (Gaussian) probability densities of various spatial parameters of the verb's movement from training examples and, thus, calculated the probabilities of spatial parameters in test data. Six different sentences constructed from two verbs and three different subject-object pairs, tested on the same signer that provided the training set, were recognized with an average word accuracy of 93.4 percent.
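Sagawa and Takeuchi's scheme, as summarized above, amounts to fitting one Gaussian density per subject-object pair over the verb's spatial parameters and choosing the most likely pair at test time. A minimal sketch, with invented labels and a diagonal-covariance assumption that the paper does not necessarily make:

```python
import numpy as np

class GaussianPairModel:
    """Learn a diagonal Gaussian over spatial parameters (e.g. start/end
    position of the verb's movement) for each subject-object pair, then
    classify a test movement by maximum likelihood."""

    def fit(self, examples):
        # examples: {pair_label: list of feature vectors, each of dim d}
        self.params = {}
        for label, X in examples.items():
            X = np.asarray(X, dtype=float)
            # Small floor on the variance avoids degenerate densities.
            self.params[label] = (X.mean(axis=0), X.var(axis=0) + 1e-6)
        return self

    def classify(self, x):
        def loglik(label):
            mu, var = self.params[label]
            return -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))
        return max(self.params, key=loglik)
```

With densities for each pair in hand, a sentence-level interpreter only has to score the observed movement parameters of each verb against every candidate pair.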
Braffort proposed an architecture where HMMs were employed for classifying lexical signs using all the features of the sign gesture (glove finger flexure values, tracker location and orientation), while verbs which can express person agreement were classified by their movement trajectory alone and classifier signs were classified by their finger flexure values only. Sentences comprising seven signs from the three different categories were successfully recognized with 92-96 percent word accuracy. They further proposed a rule-based interpreter module to establish the spatial relationship between the recognized signs, by maintaining a record of the sign articulations around the signing space. Although not applied to sign recognition, parametric HMMs were proposed in BIB002 to estimate parameters representing systematic variations such as the distance between hands in a two-handed gesture and movement direction in a pointing gesture. However, it is unclear whether the method is suitable for larger vocabularies that exhibit multiple simultaneous variations. The works above only deal with a subset of possible spatial variations, with no straightforward extension to modeling systematic speed and timing variations. In the work of Watanabe, however, both spatial size and speed information were extracted from two different musical conducting gestures with 90 percent success. This method first recognized the basic gesture using min/max points in the gesture trajectory and then measured the change in hand center-of-gravity between successive images to obtain gesture magnitude and speed information. In contrast, Ong and Ranganath BIB005 proposed an approach which simultaneously recognized the lexical meaning and the inflected meaning of gestures using Bayesian Networks. Temporal and spatial movement aspects that exhibit systematic variation (specifically movement size, direction, and speed profile) were categorized into distinct classes.
Preliminary experimental results on classification of three motion trajectory shapes (straight line, arc, circle) and four types of systematic temporal and spatial modifications (increases in speed and/or size, even and uneven rhythms) often encountered in ASL yielded 85 percent accuracy for eight test subjects.
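The channel-based decoding of Ong and Ranganath BIB005 can be caricatured as a naive-Bayes combination: each channel classifier (e.g. movement size, speed profile, trajectory shape) emits a posterior over its classes, and every (lexical sign, inflection) pair is scored by the product of the posteriors of the channel classes it predicts. The tables and labels below are invented for illustration; the actual system used trained Bayesian networks rather than this hand-built model.

```python
def decode_layered(channel_posteriors, model):
    """channel_posteriors: {channel: {class: prob}} from per-channel classifiers.
    model: {(sign, inflection): {channel: expected_class}} mapping each
    layered meaning to the channel classes it should produce.
    Returns the (sign, inflection) pair with the highest product score."""
    best, best_score = None, -1.0
    for pair, expected in model.items():
        score = 1.0
        for channel, cls in expected.items():
            # Unseen classes get probability 0, eliminating the pair.
            score *= channel_posteriors[channel].get(cls, 0.0)
        if score > best_score:
            best, best_score = pair, score
    return best, best_score
```

The appeal of factoring the problem this way is that lexical and grammatical meaning are recovered jointly, instead of treating inflected forms as entirely separate vocabulary entries.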
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Nonmanual Signals-NMS <s> This paper describes a vision-based method for recognizing the nonmanual information in Japanese Sign Language (JSL). This new modality information provides grammatical constraints useful for JSL word segmentation and interpretation. Our attention is focused on head motion, the most dominant non-manual information in JSL. We designed an interactive color-modeling scheme for robust face detection. Two video cameras are vertically arranged to take the frontal and profile image of the JSL user, and head motions are classified into eleven patterns. Moment-based feature and statistical motion feature are adopted to represent these motion patterns. Classification of the motion features is performed with the linear discriminant analysis method. Initial experimental results show that the method has good recognition rate and can be realized in real-time. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Nonmanual Signals-NMS <s> Conventions used in the text 1. Linguistics and sign linguistics 2. BSL in its social context 3. Constructing sign sentences 4. Questions and negation 5. Mouth patterns and non-manual features in BSL 6. Morphology and morphemes in BSL 7. Aspect, manner and mood 8. Space types and verb types in BSL 9. The structure of gestures and signs 10. Visual motivation and metaphor 11. The established and productive lexicons 12. Borrowing and naming signs 13. Socially unacceptable signs 14. Extended use of language in BSL Table of illustrations Index Index of signs Bibliography. <s> BIB002
In the example of Fig. 2, two facial expressions were performed, with some overlap in their duration. Spreading the lips wide (Figs. 2c and 2d) is an example of using lower facial expressions, which generally provide information about a particular sign through use of the mouth area (lips, tongue, teeth, cheek). In other examples, tongue through front teeth indicates that something is done carelessly, without paying attention; this can co-occur with a variety of signs like SHOP, DRIVING. Cheeks puffed out describes an object (e.g., TREE, TRUCK, MAN) as big or fat. The other facial expression shown in Fig. 2 depicts raised eyebrows and widened eyes (Figs. 2e and 2f), and is an example of using upper face expressions, which often occur in tandem with head and body movements (in Figs. 2e and 2f the head and body are tilted forward). They generally convey information indicating emphasis on a sign or different sentence types (i.e., question, negation, rhetorical, assertion, etc.), and involve eye blinks, eye gaze direction, eyebrows, and nose. The eyebrows can be raised in surprise or to ask a question, contracted for emphasis or to show anger, or be drawn down in a frown. The head can tilt up with chin pressed forward, nod, shake or be thrust forward. The body can lean forward or back, shift and turn to either side. Please refer to Appendix A (www.computer.org/publications/dlib) for more examples of NMS. Although the description above has focused on ASL, similar use of NMS and grammatical processes occur in SL(s) of other countries, e.g., Japan BIB001, Taiwan, Britain BIB002, Australia, Italy, and France. SL communication uses two-handed gestures and NMS; understanding SL therefore involves solving problems that are common to other research areas and applications. This includes tracking of the hands, face and body parts, feature extraction, modeling and recognition of time-varying signals, multimodal integration of information, etc.
Due to the interconnectedness of these areas, there is a vast literature available, but our intention here is to only provide an overview of research specific to SL recognition.
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> Abstract This paper describes a method of classifying single view deaf-and-mute sign language motion images. We suppose the sign language word is composed of a time sequence of units called cheremes. The chereme is described by handshape, movement, and location of the hand, which can be said to express the 3-D features of the sign language. First, a dictionary for recognizing the sign language is made based on the cheremes. Then, the macro 2-D features of the location of a hand and its movement are extracted from the red component of the input color image sequence. Further, the micro 2-D features of the shape of the hand are also extracted if necessary. The 3-D feature descriptions of the dictionary are converted into 2-D image features, and the input sign language image is classified according to the extracted features of the 2-D image. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> Recent research on model-based image coding for videotelephone and videoconferencing applications has mostly been concerned with head motion tracking and typically represents the human head as a 3D wire-frame model with texture-mapped surface features. However, the movements of the arms and hands are also important, particularly in sign language communication, and therefore should be included in the overall model. The paper describes a system which uses an articulated generalised cylindrical human model to track limb movements in a sequence of images. It outlines the closed-loop strategy developed to recognise and track human body motion and presents initial results for a complete implementation of the system. 
<s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> We present a prediction-and-verification segmentation scheme using attention images from multiple fixations. A major advantage of this scheme is that it can handle a large number of different deformable objects presented in complex backgrounds. The scheme is also relatively efficient. The system was tested to segment hands in sequences of intensity images, where each sequence represents a hand sign in American Sign Language. The experimental result showed a 95 percent correct segmentation rate with a 3 percent false rejection rate. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> We present a system for recognising hand-gestures in Sign language. The system works in real-time and uses input from a colour video camera. The user wears different coloured gloves on either hand and colour matching is used to distinguish the hands from each other and from the background. So far the system has been tested in fixed lighting conditions, with the camera a fixed distance from the user. The system is user-dependent. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> Sign Language is a rich and expressive means of communication used by profoundly deaf people as an alternative to speech. Computer recognition of sign language represents a demanding research objective analogous to recognising continuous speech, but has many more potential applications in areas such as human body motion tracking and analysis, surveillance, video telephony, and non-invasive interfacing with virtual reality applications. 
In this paper, we survey those aspects of human body motion which are relevant to sign language, and outline an overall system architecture for computer vision-based sign language recognition. We then discuss the initial stages of processing required, and show how recognition of static manual and facial gestures can be used to provide the low-level features from which an integrated multi-channel dynamic gesture recognition system can be constructed. <s> BIB005 </s> We describe a video-based analysis system for acquisition and classification of hand-arm motion concerning German sign language. These motions are recorded with a single video camera by use of a modular framegrabber system. Data acquisition as well as motion classification are performed in real-time. A colour coded glove and coloured markers at the elbow and shoulder are used. These markers are segmented from the recorded input images as a first step of image processing. Thereafter features of these coloured areas are calculated which are used for determining the 2D positions for each frame and hence the positions of hand and arm. The missing third dimension is derived from a geometric model of the human hand-arm system. The sequence of the position data is converted into a certain representation of motion. Motion is derived from rule-based classification of the performed gesture, which yields a recognition rate of 95%. <s> BIB006 </s> This paper is concerned with the video-based recognition of signs. Concentrating on the manual parameters of sign language, the system aims for the signer dependent recognition of 262 different signs taken from Sign Language of the Netherlands.
For Hidden Markov Modelling a sign is considered a doubly stochastic process, represented by an unobservable state sequence. The observations emitted by the states are regarded as feature vectors that are extracted from video frames. This work deals with three topics: Firstly the recognition of isolated signs, secondly the influence of variations of the feature vector on the recognition rate and thirdly an approach for the recognition of connected signs. The system achieves recognition rates up to 94% for isolated signs and 73% for a reduced vocabulary of connected signs. <s> BIB007 </s> This paper documents the recognition method of deciphering Japanese sign language (JSL) using projected images. The goal of the movement recognition is to foster communication between hearing impaired and people capable of normal speech. We use a stereo camera for recording three-dimensional movements, an image processing board for tracking movements, and a personal computer for an image processor charting the recognition of JSL patterns. This system works by formalizing the space area of the signers according to the characteristics of the human body, determining components such as location and movements, and then recognizing sign language patterns. The system is able to recognize JSL by determining the extent of similarities in the sign field, and does so even when vibrations in hand movements occur and when there are differences in body build. We obtained useful results from recognition experiments in 38 different JSL signs in two signers. <s> BIB008 </s> We present an approach to continuous American sign language (ASL) recognition, which uses as input 3D data of arm motions.
We use computer vision methods for 3D object shape and motion parameter extraction and an Ascension Technologies 'Flock of Birds' interchangeably to obtain accurate 3D movement parameters of ASL sentences, selected from a 53-sign vocabulary and a widely varied sentence structure. These parameters are used as features for hidden Markov models (HMMs). To address coarticulation effects and improve our recognition results, we experimented with two different approaches. The first consists of training context-dependent HMMs and is inspired by speech recognition systems. The second consists of modeling transient movements between signs and is inspired by the characteristics of ASL phonology. Our experiments verified that the second approach yields better recognition results. <s> BIB009 </s> The paper describes a real-time system which tracks the uncovered/unmarked hands of a person performing sign language. It extracts the face and hand regions using their skin colors, computes blobs and then tracks the location of each hand using a Kalman filter. The system has been tested for hand tracking using actual sign-language motion by native signers. The experimental results indicate that the system is capable of tracking hands even while they are overlapping the face. <s> BIB010 </s> We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar).
Both experiments use a 40-word lexicon. <s> BIB011 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> This paper presents a sign language recognition system which consists of three modules: model-based hand tracking, feature extraction, and gesture recognition using a 3D Hopfield neural network (HNN). The first one uses the Hausdorff distance measure to track shape-variant hand motion, the second one applies the scale and rotation-invariant Fourier descriptor to characterize hand figures, and the last one performs a graph matching between the input gesture model and the stored models by using a 3D modified HNN to recognize the gesture. Our system tests 15 different hand gestures. The experimental results show that our system can achieve above 91% recognition rate, and the recognition process time is about 10 s. The major contribution in this paper is that we propose a 3D modified HNN for gesture recognition which is more reliable than the conventional methods. <s> BIB012 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> A sign language recognition system is required to use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. We present an adequate local feature recognizer for a sign language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols which correspond to clusters by a clustering technique. The clusters are created from a training set of extracted hand images so that a similar appearance can be classified into the same cluster on an eigenspace. The experimental results indicate that our system can recognize a sign language word even in two-handed and hand-to-hand contact cases. 
<s> BIB013 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> Tracking interacting human body parts from a single two-dimensional view is difficult due to occlusion, ambiguity and spatio-temporal discontinuities. We present a Bayesian network method for this task. The method is not reliant upon spatio-temporal continuity, but exploits it when present. Our inferencebased tracking model is compared with a CONDENSATION model augmented with a probabilistic exclusion mechanism. We show that the Bayesian network has the advantages of fully modelling the state space, explicitly representing domain knowledge, and handling complex interactions between variables in a globally consistent and computationally effective manner. <s> BIB014 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> Abstract In this paper, we present a new approach to recognizing hand signs. In this approach, motion recognition (the hand movement) is tightly coupled with spatial recognition (hand shape). The system uses multiclass, multidimensional discriminant analysis to automatically select the most discriminating linear features for gesture classification. A recursive partition tree approximator is proposed to do classification. This approach combined with our previous work on hand segmentation forms a new framework which addresses the three key aspects of hand sign interpretation: hand shape, location, and movement. The framework has been tested to recognize 28 different hand signs. The experimental results show that the system achieved a 93.2% recognition rate for test sequences that had not been used in the training phase. It is shown that our approach provide performance better than that of nearest neighbor classification in the eigensubspace. 
<s> BIB015 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> We are currently developing a vision-based sign language recognition system for mobile use. This requires operability in different environments with a large range of possible users, ideally under arbitrary conditions. In this paper, the problem of finding relevant information in single-view image sequences is tackled. We discuss some issues in low level image cues and present an approach for the fast detection of a signing persons hands. This is achieved by using a modified generic skin color model combined with pixel level motion information, which is obtained from motion history images. The approach is demonstrated with a watershed segmentation algorithm. <s> BIB016 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> This paper is concerned with the automatic recognition of German continuous sign language. For the most user-friendliness only one single color video camera is used for image recording. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system design, which are in general based on subunits, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs will be outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits called fenone. K-means algorithm is used for the definition of such fenones. The software prototype of the system is currently evaluated in experiments. 
<s> BIB017 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> This paper introduces a model-based hand gesture recognition system, which consists of three phases: feature extraction, training, and recognition. In the feature extraction phase, a hybrid technique combines the spatial (edge) and the temporal (motion) information of each frame to extract the feature images. Then, in the training phase, we use the principal component analysis (PCA) to characterize spatial shape variations and the hidden Markov models (HMM) to describe the temporal shape variations. A modified Hausdorff distance measurement is also applied to measure the similarity between the feature images and the pre-stored PCA models. The similarity measures are referred to as the possible observations for each frame. Finally, in recognition phase, with the pre-trained PCA models and HMM, we can generate the observation patterns from the input sequences, and then apply the Viterbi algorithm to identify the gesture. In the experiments, we prove that our method can recognize 18 different continuous gestures effectively. <s> BIB018 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> We present an algorithm for extracting and classifying two-dimensional motion in an image sequence based on motion trajectories. First, a multiscale segmentation is performed to generate homogeneous regions in each frame. Regions between consecutive frames are then matched to obtain two-view correspondences. Affine transformations are computed from each pair of corresponding regions to define pixel matches. Pixels matches over consecutive image pairs are concatenated to obtain pixel-level motion trajectories across the image sequence. Motion patterns are learned from the extracted trajectories using a time-delay neural network. 
We apply the proposed method to recognize 40 hand gestures of American Sign Language. Experimental results show that motion patterns of hand gestures can be extracted and recognized accurately using motion trajectories. <s> BIB019 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> We present a system for tracking the hands of a user in a frontal camera view for gesture recognition purposes. The system uses multiple cues, incorporates tracing and prediction algorithms, and applies probabilistic inference to determine the trajectories of the hands reliably even in case of hand-face overlap. A method for assessing tracking quality is also introduced. Tests were performed with image sequences of 152 signs from German Sign Language, which have been segmented manually beforehand to offer a basis for quantitative evaluation. A hit rate of 81.1% was achieved on this material. <s> BIB020 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> This paper deals with the automatic recognition of German signs. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system designs, which are in general based on phonemes, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs is outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected, as new signs can be added to the vocabulary database without re-training the existing hidden Markov models (HMMs) for subunits. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits. 
In first experiments, a recognition accuracy of 92.5% was achieved for 100 signs, which were previously trained. For 50 new signs, an accuracy of 81% was achieved without retraining of subunit-HMMs. <s> BIB021 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> Abstract In this paper, we introduce a hand gesture recognition system to recognize continuous gestures against a stationary background. The system consists of four modules: real-time hand tracking and extraction, feature extraction, hidden Markov model (HMM) training, and gesture recognition. First, we apply a real-time hand tracking and extraction algorithm to trace the moving hand and extract the hand region, then we use the Fourier descriptor (FD) to characterize spatial features and the motion analysis to characterize the temporal features. We combine the spatial and temporal features of the input image sequence as our feature vector. After having extracted the feature vectors, we apply HMMs to recognize the input gesture. The gesture to be recognized is separately scored against different HMMs. The model with the highest score indicates the corresponding gesture. In the experiments, we have tested our system to recognize 20 different gestures, and the recognition rate is above 90%. <s> BIB022 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> The ability to detect a person's unconstrained hand in a natural video sequence has applications in sign language, gesture recognition and HCI. This paper presents a novel, unsupervised approach to training an efficient and robust detector which is capable of not only detecting the presence of human hands within an image but classifying the hand shape. A database of images is first clustered using a k-method clustering algorithm with a distance metric based upon shape context.
From this, a tree structure of boosted cascades is constructed. The head of the tree provides a general hand detector while the individual branches of the tree classify a valid shape as belonging to one of the predetermined clusters exemplified by an indicative hand shape. Preliminary experiments showed that the approach boasts a promising 99.8% success rate on hand detection and 97.4% success at classification. Although we demonstrate the approach within the domain of hand shape, it is equally applicable to other problems where both detection and classification are required for objects that display high variability in appearance. <s> BIB023 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Vision-Based Hand Localization and Tracking <s> This paper describes a visual object detection framework that is capable of processing images extremely rapidly while achieving high detection rates. There are three key contributions. The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features and yields extremely efficient classifiers [4]. The third contribution is a method for combining classifiers in a “cascade” which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. A set of experiments in the domain of face detection is presented. The system yields face detection performance comparable to the best previous systems [16, 11, 14, 10, 1]. Implemented on a conventional desktop, face detection proceeds at 15 frames per second.
<s> BIB024
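The “Integral Image” representation described in the abstract above reduces any rectangular box sum to four table lookups. A minimal numpy sketch of the idea (illustrative only, not the authors' implementation):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]."""
    # A leading row/column of zeros makes box sums index cleanly.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) using four lookups."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()  # 5+6+9+10 = 30
```

The Haar-like features of the detector are differences of such box sums, which is why each feature costs only a handful of lookups regardless of its size.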
In order to capture the whole signing space, the entire upper body needs to be in the camera's field-of-view (FOV). The hand(s) must be located in the image sequence, and this is generally implemented by using color, motion, and/or edge information. If skin-color detection is used, the signer is often required to wear long-sleeved clothing, with restrictions on other skin-colored objects in the background ( BIB016 , BIB010 , BIB013 , BIB014 , BIB011 , BIB019 , BIB020 ). Skin-color detection was combined with motion cues in Akyol and Alvarado BIB016 , Imagawa et al. BIB010 , Yang et al. BIB019 , and combined with edge detection in Terrillon et al. . The hands were distinguished from the face with the assumption that the head is relatively static in BIB016 , BIB010 , BIB011 , and that the head region is bigger in size in BIB019 . A multilayer perceptron neural network-based frontal face detector was used in for the same purpose. Color cues have also been used in conjunction with colored gloves ( BIB007 , BIB017 , BIB021 , BIB004 , BIB005 ). Motion cues were used in BIB003 , BIB015 , BIB012 , BIB018 , with the assumption that the hand is the only moving object on a stationary background and that the signer's torso and head are relatively still. Another common requirement is that the hand must be constantly moving. In Chen et al. BIB022 and Huang and Jeng BIB018 , the hand was detected by logically ANDing difference images with edge maps and skin-color regions. In Cui and Weng's system BIB003 , BIB015 , an outline of the motion-detected hand was obtained by mapping partial views of the hand to previously learned hand contours, using a hierarchical nearest neighbor decision rule. This yielded 95 percent hand detection accuracy, but at a high computational cost (58.3s per frame). Ong and Bowden BIB023 detected hands with 99.8 percent accuracy in grayscale images with shape information alone, using a boosted cascade of classifiers BIB024 .
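The cue combination described above — logically ANDing skin-color evidence with frame-to-frame motion — can be sketched in a few lines of numpy. This is an illustrative sketch only; the skin-color rule and thresholds below are hypothetical placeholders (real systems typically tune them, often in HSV or YCbCr space), and an edge map could be ANDed in the same way:

```python
import numpy as np

def hand_candidate_mask(frame_rgb, prev_gray, motion_thresh=25):
    """Candidate hand pixels = skin-colored AND moving.

    The skin rule (red-dominant pixels above minimum brightness) and
    motion_thresh are illustrative placeholders, not values from the
    cited systems.
    """
    r, g, b = (frame_rgb[..., i].astype(np.int16) for i in range(3))
    # Crude skin-color cue in RGB space.
    skin = (r > 90) & (g > 40) & (b > 20) & (r > g) & (r > b)
    # Motion cue: thresholded difference against the previous frame.
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    motion = np.abs(gray - prev_gray) > motion_thresh
    # Logical AND of the cues, as in the systems discussed above.
    return skin & motion
```

Note the implied constraints: a static camera and background (for the difference image to isolate the hand) and no other moving skin-colored objects in view.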
Signers were constrained to wear long-sleeved dark clothing, in front of mostly dark backgrounds. Tanibata et al. extracted skin, clothes, head, and elbow regions by using a very restrictive person-specific template that required the signer to be seated in a known initial position/pose. Some of the other works also localized the body torso ( BIB007 , BIB017 , BIB021 , BIB001 ), elbow and shoulder ( BIB006 ), along with the hands and face, using color cues and knowledge of the body's geometry. This allowed the position and movement of the hands to be referenced to the signer's body. Two-dimensional tracking can be performed using blob-based ( BIB010 , BIB011 ), view-based ( BIB018 ), or hand contour/boundary models ( BIB022 , BIB015 , BIB012 ), or by matching motion-segmented regions ( BIB019 ). Particularly challenging is tracking in the presence of occlusion. Some works avoid the occurrence of occlusion entirely by their choice of camera angle ( BIB019 ), sign vocabulary ( BIB022 , BIB012 , BIB018 ), or by having signs performed unnaturally so as to avoid occluding the face ( BIB015 ). In these and other works, the left hand and/or face may be excluded from the image FOV ( BIB022 , BIB012 , BIB018 , BIB001 ). Another simplification is to use colored gloves, whereby face/hand overlap becomes straightforward to deal with. In the case of unadorned hands, simple methods for tracking and dealing with occlusions are generally unsatisfactory. For example, prediction techniques are used to estimate hand location based on the model dynamics and previously known locations, with the assumption of small, continuous hand movement ( BIB022 , BIB010 , BIB011 , BIB019 ). Starner et al.'s BIB011 method of subtracting the (assumed static) face region from the merged face/hand blob can only handle small overlaps. Overlapping hands were detected, but, for simplicity, features extracted from the merged blob were assigned to both hands.
In addition, the left/right hand labels were always assigned to the leftmost and rightmost hand blobs, respectively. Imagawa et al. BIB010 also had problems dealing with complex bimanual hand movements (crossing, overlapping, and bouncing back) as Kalman filters were used for each hand without data association. Tracking accuracy of 82-97 percent was obtained in a lab situation, but this degraded to as low as 74 percent for a published videotape with realistic signing at natural speed and NMS (this violated their assumptions of small hand movement between adjacent frames and a relatively static head). Their later work BIB013 dealt with face/hand overlaps by applying a sliding observation window over the merged blob and computing the likelihood of the window subimage belonging to one of the possible handshape classes. Hand location was correctly determined with an 85 percent success rate. Tanibata et al. distinguished the hands and face in cases of overlap by using texture templates from previously found hand and face regions. This method was found to be unsatisfactory when the interframe change in handshape, face orientation, or facial expression was large. The more robust tracking methods that can deal with fast, discontinuous hand motion, significant overlap, and complex hand interactions do not track the hands and face separately, but rather apply probabilistic reasoning for simultaneous assignment of labels to the possible hand/face regions BIB020 , BIB014 . In both these works, the assumption is that only the two largest skin-colored blobs other than the head could be hands (thus restricting other skin-colored objects in the background and requiring long-sleeved clothing). Zieren et al. BIB020 tracked (with 81.1 percent accuracy) both hands and face in video sequences of 152 German Sign Language (GSL) signs.
Probabilistic reasoning using heuristic rules (based on multiple features such as relative positions of hands, sizes of skin-colored blobs, and Kalman filter prediction) was applied for labeling detected skin-colored blobs. Sherrah and Gong BIB014 demonstrated similarly good results while allowing head and body movement, with the assumption that the head can be tracked reliably. Multiple cues (motion, color, orientation, size and shape of clusters, distance relative to other body parts) were used to infer blob identities with a Bayesian network whose structure and node conditional probability distributions represented constraints of articulated body parts. In contrast to the above works, which use 2D approaches, Downton and Drouet BIB002 used a 3D model-based approach where they built a hierarchical cylindrical model of the upper body, and implemented a project-and-match process with detected edges in the image to obtain kinematic parameters for the model. Their method failed to track after a few frames due to error propagation in the motion estimates. There are also a few works that use multiple cameras to obtain 3D measurements, albeit at great computational cost. Matsuo et al. BIB008 used stereo cameras to localize the hands in 3D and estimate the location of body parts. Vogler and Metaxas BIB009 placed three cameras orthogonally to overcome occlusion, and used deformable models for the arm/hand in each of the three camera views. With regard to background complexity, several works use uniform backgrounds ( BIB007 , BIB017 , BIB021 , BIB012 , BIB008 , BIB001 , BIB019 , BIB020 ). Even with a nonuniform background, background subtraction was usually not used to segment out the signer. Instead, the methods focused on using various cues to directly locate the hands, face, or other body parts with simplifying constraints. In contrast, Chen et al. BIB022 used background modeling and subtraction to extract the foreground within which the hand was located.
This eases some imaging restrictions and constraints; BIB022 did not require colored gloves or long-sleeved clothing, and allowed a complex, cluttered background that included moving objects. However, the hand was required to be constantly moving. The imaging restrictions and constraints encountered in vision-based approaches are listed in Table 1.
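The background modeling and subtraction step discussed above can be sketched with a simple running-average model. This is an illustrative stand-in, not the exact method of Chen et al.; the `alpha` and `thresh` values are hypothetical:

```python
import numpy as np

class RunningBackground:
    """Running-average background model with foreground extraction.

    Illustrative sketch only; alpha (adaptation rate) and thresh
    (foreground threshold) are hypothetical values.
    """
    def __init__(self, first_frame, alpha=0.05, thresh=20):
        self.bg = first_frame.astype(np.float64)
        self.alpha = alpha
        self.thresh = thresh

    def apply(self, frame):
        """Return a boolean foreground mask and update the model."""
        fg = np.abs(frame.astype(np.float64) - self.bg) > self.thresh
        # Update the model only where the scene looks like background,
        # so a moving hand is not absorbed into the model.
        self.bg[~fg] = ((1 - self.alpha) * self.bg[~fg]
                        + self.alpha * frame[~fg])
        return fg
```

Because foreground pixels are never blended into the model, a hand that stops moving will eventually be the only unmodeled region — consistent with the constraint, noted above, that the hand must keep moving.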
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Abstract This paper describes a method of classifying single view deaf-and-mute sign language motion images. We suppose the sign language word is composed of a time sequence of units called cheremes. The chereme is described by handshape, movement, and location of the hand, which can be said to express the 3-D features of the sign language. First, a dictionary for recognizing the sign language is made based on the cheremes. Then, the macro 2-D features of the location of a hand and its movement are extracted from the red component of the input color image sequence. Further, the micro 2-D features of the shape of the hand are also extracted if necessary. The 3-D feature descriptions of the dictionary are converted into 2-D image features, and the input sign language image is classified according to the extracted features of the 2-D image. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> I present a visual hand tracking system that can recover 3D hand shape and motion from a stream of 2D input images. The hand tracker was originally intended as part of a computer interface for (American) sign language signers, but the system may also serve as a general purpose hand tracking tool. In contrast to some previous 2D-to-sign approaches, I am taking the 3-dimensional nature of the signing process into account. My main objective was to create a versatile hand model and to design an algorithm that uses this model in an effective way to recover the 3D motion of the hand and fingers from 2D clues. The 2D clues are provided by colour-coded markers on the finger joints. 
The system then finds the 3D shape and motion of the hand by fitting a simple skeleton-like model to the joint locations found in the image. This fitting is done using a nonlinear, continuous optimization approach that gradually adjusts the pose of the model until correspondence with the image is reached. My present implementation of the tracker does not work in real time. However, it should be possible to achieve at least slow real-time tracking with appropriate hardware (a board for real-time image-capturing and colour-marker detection) and some code optimization. Such an 'upgraded' version of the tracker might serve as a prototype for a 'colour glove' package providing a cheap and comfortable (though maybe less powerful) alternative to the data glove. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Model-based vision is firmly established as a robust approach to recognizing and locating known rigid objects in the presence of noise, clutter, and occlusion. It is more problematic to apply model-based methods to images of objects whose appearance can vary, though a number of approaches based on the use of flexible templates have been proposed. The problem with existing methods is that they sacrifice model specificity in order to accommodate variability, thereby compromising robustness during image interpretation. We argue that a model should only be able to deform in ways characteristic of the class of objects it represents. We describe a method for building models by learning patterns of variability from a training set of correctly annotated images. These models can be used for image search in an iterative refinement algorithm analogous to that employed by Active Contour Models (Snakes). The key difference is that our Active Shape Models can only deform to fit the data in ways consistent with the training set.
We show several practical examples where we have built such models and used them to locate partially occluded objects in noisy, cluttered images. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We present a prediction-and-verification segmentation scheme using attention images from multiple fixations. A major advantage of this scheme is that it can handle a large number of different deformable objects presented in complex backgrounds. The scheme is also relatively efficient. The system was tested to segment hands in sequences of intensity images, where each sequence represents a hand sign in American Sign Language. The experimental result showed a 95 percent correct segmentation rate with a 3 percent false rejection rate. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We describe a video-based analysis system for acquisition and classification of hand-arm motion concerning German sign language. These motions are recorded with a single video camera by use of a modular framegrabber system. Data acquisition as well as motion classification are performed in real-time. A colour-coded glove and coloured markers at the elbow and shoulder are used. These markers are segmented from the recorded input images as a first step of image processing. Thereafter, features of these coloured areas are calculated, which are used for determining the 2D positions for each frame and hence the positions of hand and arm. The missing third dimension is derived from a geometric model of the human hand-arm system. The sequence of the position data is converted into a certain representation of motion. Motion is derived from rule-based classification of the performed gesture, which yields a recognition rate of 95%.
<s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We present a system for recognising hand-gestures in Sign language. The system works in real-time and uses input from a colour video camera. The user wears different coloured gloves on either hand and colour matching is used to distinguish the hands from each other and from the background. So far the system has been tested in fixed lighting conditions, with the camera a fixed distance from the user. The system is user-dependent. <s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Sign Language is a rich and expressive means of communication used by profoundly deaf people as an alternative to speech. Computer recognition of sign language represents a demanding research objective analogous to recognising continuous speech, but has many more potential applications in areas such as human body motion tracking and analysis, surveillance, video telephony, and non-invasive interfacing with virtual reality applications. In this paper, we survey those aspects of human body motion which are relevant to sign language, and outline an overall system architecture for computer vision-based sign language recognition. We then discuss the initial stages of processing required, and show how recognition of static manual and facial gestures can be used to provide the low-level features from which an integrated multi-channel dynamic gesture recognition system can be constructed. 
<s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> This work presents a design for a human computer interface capable of recognizing 25 gestures from the international hand alphabet in real-time. Principal Component Analysis (PCA) is used to extract features from images of gestures. The features represent gesture images in terms of an optimal coordinate system, in which the classes of gestures make up clusters. The system is divided into two parts: an off-line and an on-line part. The feature selection and generation of a classifier is performed off-line. On-line the obtained features and the classifier are used to classify new and unknown gesture images in real-time. Results show that an overall off-line recognition rate averaging 99% on 1500 images is achieved when trained on 1000 other images. The on-line system runs at 14 frames per second. <s> BIB008 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> This paper is concerned with the video-based recognition of signs. Concentrating on the manual parameters of sign language, the system aims for the signer dependent recognition of 262 different signs taken from Sign Language of the Netherlands. For Hidden Markov Modelling a sign is considered a doubly stochastic process, represented by an unobservable state sequence. The observations emitted by the states are regarded as feature vectors, that are extracted from video frames. This work deals with three topics: Firstly the recognition of isolated signs, secondly the influence of variations of the feature vector on the recognition rate and thirdly an approach for the recognition of connected signs. The system achieves recognition rates up to 94% for isolated signs and 73% for a reduced vocabulary of connected signs. 
<s> BIB009 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> A new pattern matching method, the partly-hidden Markov model, is proposed for gesture recognition. The hidden Markov model, which is widely used for time-series pattern recognition, can deal only with piecewise-stationary stochastic processes. We solved this problem by introducing the modified second-order Markov model, in which the first state is hidden and the second one is observable. As shown by the results of a 6-sign recognition test, the error rate was improved by 73% compared with a normal HMM. <s> BIB010 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> This paper documents the recognition method of deciphering Japanese sign language (JSL) using projected images. The goal of the movement recognition is to foster communication between hearing-impaired people and people capable of normal speech. We use a stereo camera for recording three-dimensional movements, an image-processing board for tracking movements, and a personal computer for an image processor charting the recognition of JSL patterns. This system works by formalizing the space area of the signers according to the characteristics of the human body, determining components such as location and movements, and then recognizing sign language patterns. The system is able to recognize JSL by determining the extent of similarities in the sign field, and does so even when vibrations in hand movements occur and when there are differences in body build. We obtained useful results from recognition experiments on 38 different JSL signs with two signers.
<s> BIB011 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We present an approach to continuous American sign language (ASL) recognition, which uses as input 3D data of arm motions. We use computer vision methods for 3D object shape and motion parameter extraction and an ascension technologies 'Flock of Birds' interchangeably to obtain accurate 3D movement parameters of ASL sentences, selected from a 53-sign vocabulary and a widely varied sentence structure. These parameters are used as features for hidden Markov models (HMMs). To address coarticulation effects and improve our recognition results, we experimented with two different approaches. The first consists of training context-dependent HMMs and is inspired by speech recognition systems. The second consists of modeling transient movements between signs and is inspired by the characteristics of ASL phonology. Our experiments verified that the second approach yields better recognition results. <s> BIB012 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon. 
<s> BIB013 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> This paper presents a sign language recognition system which consists of three modules: model-based hand tracking, feature extraction, and gesture recognition using a 3D Hopfield neural network (HNN). The first one uses the Hausdorff distance measure to track shape-variant hand motion, the second one applies the scale and rotation-invariant Fourier descriptor to characterize hand figures, and the last one performs a graph matching between the input gesture model and the stored models by using a 3D modified HNN to recognize the gesture. Our system tests 15 different hand gestures. The experimental results show that our system can achieve above 91% recognition rate, and the recognition process time is about 10 s. The major contribution in this paper is that we propose a 3D modified HNN for gesture recognition which is more reliable than the conventional methods. <s> BIB014 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> The paper describes a real-time system which tracks the uncovered/unmarked hands of a person performing sign language. It extracts the face and hand regions using their skin colors, computes blobs and then tracks the location of each hand using a Kalman filter. The system has been tested for hand tracking using actual sign-language motion by native signers. The experimental results indicate that the system is capable of tracking hands even while they are overlapping the face. 
<s> BIB015 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> This paper presents a system for the recognition of sign language based on a theory of shape representation using size functions proposed by P. Frosini [5]. Our system consists of three modules: feature extraction, sign representation and sign recognition. The first performs an edge detection operation, the second uses size functions and inertia moments to represent hand signs, and the last uses a neural network to recognize hand gestures. Sign representation is an important step which we will deal with. Unlike previous work [15, 16], a new approach to the representation of hand gestures is proposed, based on size functions. Each sign is represented by means of a feature vector computed from a new pair of moment-based size functions. The work reported here indicates that moment-based size functions can be effectively used for the recognition of sign language even in the presence of shape changes due to differences in hands, position, style of signing, and viewpoint. <s> BIB016 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Since the human hand is highly articulated and deformable, hand posture recognition is a challenging example in the research on view-independent object recognition. Due to the difficulties of the model-based approach, the appearance-based learning approach is promising to handle large variation in visual inputs. However, the generalization of many proposed supervised learning methods to this problem often suffers from the insufficiency of labeled training data. This paper describes an approach to alleviate this difficulty by adding a large unlabeled training set. 
Combining supervised and unsupervised learning paradigms, a novel and powerful learning approach, the Discriminant-EM (D-EM) algorithm, is proposed in this paper to handle the case of a small labeled training set. Experiments show that D-EM outperforms many other learning methods. Based on this approach, we implement a gesture interface to recognize a set of predefined gesture commands, and it is also extended to hand detection. This algorithm can also apply to other object recognition tasks. <s> BIB017 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Abstract In this paper, we present a new approach to recognizing hand signs. In this approach, motion recognition (the hand movement) is tightly coupled with spatial recognition (hand shape). The system uses multiclass, multidimensional discriminant analysis to automatically select the most discriminating linear features for gesture classification. A recursive partition tree approximator is proposed to do classification. This approach combined with our previous work on hand segmentation forms a new framework which addresses the three key aspects of hand sign interpretation: hand shape, location, and movement. The framework has been tested to recognize 28 different hand signs. The experimental results show that the system achieved a 93.2% recognition rate for test sequences that had not been used in the training phase. It is shown that our approach provide performance better than that of nearest neighbor classification in the eigensubspace. <s> BIB018 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> A sign language recognition system is required to use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. 
We present an adequate local feature recognizer for a sign language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols which correspond to clusters by a clustering technique. The clusters are created from a training set of extracted hand images so that a similar appearance can be classified into the same cluster on an eigenspace. The experimental results indicate that our system can recognize a sign language word even in two-handed and hand-to-hand contact cases. <s> BIB019 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Automatic gesture recognition systems generally require two separate processes: a motion sensing process where some motion features are extracted from the visual input; and a classification process where the features are recognised as gestures. We have developed the Hand Motion Understanding (HMU) system that uses the combination of a 3D model-based hand tracker for motion sensing and an adaptive fuzzy expert system for motion classification. The HMU system understands static and dynamic hand signs of the Australian Sign Language (Auslan). This paper presents the hand tracker that extracts 3D hand configuration data with 21 degrees-of-freedom (DOFs) from a 2D image sequence that is captured from a single viewpoint, with the aid of a colour-coded glove. Then the temporal sequence of 3D hand configurations detected by the tracker is recognised as a sign by an adaptive fuzzy expert system. The HMU system was evaluated with 22 static and dynamic signs. Before training the HMU system achieved 91% recognition, and after training it achieved over 95% recognition. 
<s> BIB020 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Tracking interacting human body parts from a single two-dimensional view is difficult due to occlusion, ambiguity and spatio-temporal discontinuities. We present a Bayesian network method for this task. The method is not reliant upon spatio-temporal continuity, but exploits it when present. Our inference-based tracking model is compared with a CONDENSATION model augmented with a probabilistic exclusion mechanism. We show that the Bayesian network has the advantages of fully modelling the state space, explicitly representing domain knowledge, and handling complex interactions between variables in a globally consistent and computationally effective manner. <s> BIB021 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Hand gestures play an important role in communication between people during their daily lives. But the extensive use of hand gestures as a means of communication can be found in sign languages. Sign language is the basic communication method between deaf people. A translator is usually needed when an ordinary person wants to communicate with a deaf one. The work presented in this paper aims at developing a system for automatic translation of gestures of the manual alphabets in the Arabic sign language. In doing so, we have designed a collection of ANFIS networks, each of which is trained to recognize one gesture. Our system does not rely on using any gloves or visual markings to accomplish the recognition job. Instead, it deals with images of bare hands, which allows the user to interact with the system in a natural way.
An image of the hand gesture is processed and converted into a set of features that comprises the lengths of some vectors which are selected to span the fingertips' region. The extracted features are rotation, scale, and translation invariant, which makes the system more flexible. The subtractive clustering algorithm and the least-squares estimator are used to identify the fuzzy inference system, and the training is achieved using the hybrid learning algorithm. Experiments revealed that our system was able to recognize the 30 Arabic manual alphabets with an accuracy of 93.55%. <s> BIB022 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> The accurate classification of hand gestures is crucial in the development of novel hand gesture-based systems designed for human-computer interaction (HCI) and for human alternative and augmentative communication (HAAC). A complete vision-based system, consisting of hand gesture acquisition, segmentation, filtering, representation and classification, is developed to robustly classify hand gestures. The algorithms in the subsystems are formulated or selected to optimally classify hand gestures. The gray-scale image of a hand gesture is segmented using a histogram thresholding algorithm. A morphological filtering approach is designed to effectively remove background and object noise in the segmented image. The contour of a gesture is represented by a localized contour sequence whose samples are the perpendicular distances between the contour pixels and the chord connecting the end-points of a window centered on the contour pixels. Gesture similarity is determined by measuring the similarity between the localized contour sequences of the gestures. Linear alignment and nonlinear alignment are developed to measure the similarity between the localized contour sequences.
Experiments and evaluations on a subset of American Sign Language (ASL) hand gestures show that, by using nonlinear alignment, no gestures are misclassified by the system. Additionally, it is estimated that real-time gesture classification is possible through the use of a high-speed PC, high-speed digital signal processing chips and code optimization. <s> BIB023 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> This paper is concerned with the automatic recognition of German continuous sign language. For maximum user-friendliness, only a single color video camera is used for image recording. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system designs, which are in general based on subunits, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs will be outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits called fenones. The K-means algorithm is used for the definition of such fenones. The software prototype of the system is currently being evaluated in experiments.
Then, in the training phase, we use principal component analysis (PCA) to characterize spatial shape variations and hidden Markov models (HMMs) to describe the temporal shape variations. A modified Hausdorff distance measurement is also applied to measure the similarity between the feature images and the pre-stored PCA models. The similarity measures are referred to as the possible observations for each frame. Finally, in the recognition phase, with the pre-trained PCA models and HMM, we can generate the observation patterns from the input sequences, and then apply the Viterbi algorithm to identify the gesture. In the experiments, we show that our method can recognize 18 different continuous gestures effectively. <s> BIB025 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We are currently developing a vision-based sign language recognition system for mobile use. This requires operability in different environments with a large range of possible users, ideally under arbitrary conditions. In this paper, the problem of finding relevant information in single-view image sequences is tackled. We discuss some issues in low level image cues and present an approach for the fast detection of a signing person's hands. This is achieved by using a modified generic skin color model combined with pixel level motion information, which is obtained from motion history images. The approach is demonstrated with a watershed segmentation algorithm. <s> BIB026 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Principal Component Analysis (PCA) and Multiple Discriminant Analysis (MDA) have long been used for appearance-based hand posture recognition. In this paper, we propose a novel PCA/MDA scheme for hand posture recognition.
Unlike other PCA/MDA schemes, the PCA layer acts as a crude classifier. Since posture alone cannot provide sufficient discriminating information, each input pattern will be given a likelihood of being in the nodes of PCA layers, instead of a strict division. Based on the Expectation-Maximization (EM) algorithm, we introduce three methods to estimate the parameters for this crude classification during training. The experiments on a 110-sign vocabulary show a significant improvement compared with the global PCA/MDA. <s> BIB027 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> This paper deals with the automatic recognition of German signs. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system designs, which are in general based on phonemes, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs is outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected, as new signs can be added to the vocabulary database without re-training the existing hidden Markov models (HMMs) for subunits. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits. In first experiments, a recognition accuracy of 92.5% was achieved for 100 signs, which were previously trained. For 50 new signs an accuracy of 81% was achieved without retraining of subunit-HMMs. <s> BIB028 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We present an algorithm for extracting and classifying two-dimensional motion in an image sequence based on motion trajectories.
First, a multiscale segmentation is performed to generate homogeneous regions in each frame. Regions between consecutive frames are then matched to obtain two-view correspondences. Affine transformations are computed from each pair of corresponding regions to define pixel matches. Pixel matches over consecutive image pairs are concatenated to obtain pixel-level motion trajectories across the image sequence. Motion patterns are learned from the extracted trajectories using a time-delay neural network. We apply the proposed method to recognize 40 hand gestures of American Sign Language. Experimental results show that motion patterns of hand gestures can be extracted and recognized accurately using motion trajectories. <s> BIB029 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Abstract We demonstrate that a small number of 2D linear statistical models are sufficient to capture the shape and appearance of a face from a wide range of viewpoints. Such models can be used to estimate head orientation and track faces through large angles. Given multiple images of the same face we can learn a coupled model describing the relationship between the frontal appearance and the profile of a face. This relationship can be used to predict new views of a face seen from one view and to constrain search algorithms which seek to locate a face in multiple views simultaneously. <s> BIB030 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> We present a system for tracking the hands of a user in a frontal camera view for gesture recognition purposes. The system uses multiple cues, incorporates tracking and prediction algorithms, and applies probabilistic inference to determine the trajectories of the hands reliably even in the case of hand-face overlap.
A method for assessing tracking quality is also introduced. Tests were performed with image sequences of 152 signs from German Sign Language, which have been segmented manually beforehand to offer a basis for quantitative evaluation. A hit rate of 81.1% was achieved on this material. <s> BIB031 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Feature Extraction and Parameter Estimation in the Vision-Based Approaches <s> Abstract In this paper, we introduce a hand gesture recognition system to recognize continuous gestures in front of a stationary background. The system consists of four modules: real-time hand tracking and extraction, feature extraction, hidden Markov model (HMM) training, and gesture recognition. First, we apply a real-time hand tracking and extraction algorithm to trace the moving hand and extract the hand region, then we use the Fourier descriptor (FD) to characterize spatial features and the motion analysis to characterize the temporal features. We combine the spatial and temporal features of the input image sequence as our feature vector. After having extracted the feature vectors, we apply HMMs to recognize the input gesture. The gesture to be recognized is separately scored against different HMMs. The model with the highest score indicates the corresponding gesture. In the experiments, we have tested our system to recognize 20 different gestures, and the recognition rate is above 90%. <s> BIB032
Research has focused on understanding hand signing in SL or, in the more restrictive case, classification of fingerspelled alphabets and numbers. For the former, the FOV includes the upper body of the signer, allowing the hands the range of movement required for signing. For fingerspelling, the range of hand motion is very small and consists mainly of finger configuration and orientation information. For full signing scenarios, features that characterize whole hand location and movement as well as appearance features that result from handshape and orientation are extracted, whereas for fingerspelling only the latter features are used. Thus, for works where the goal is classification of fingerspelling or handshape ( BIB022 , BIB008 , , BIB027 , BIB023 , BIB016 , BIB017 ), the entire FOV contains only the hand. In these works (with the exception of BIB017 ), the hand is generally restricted to palm facing the camera, against a uniform background. For full signing scenarios, a commonly extracted positional feature is the center-of-gravity of the hand blob. This can be measured in absolute image coordinates ( BIB013 ), relative to the face or body ( BIB009 , BIB024 , BIB028 , BIB010 , BIB001 , ), relative to the first gesture frame ( BIB018 ), or relative to the previous frame ( BIB010 ). Alternatively, motion features have been used to characterize hand motion, e.g., motion trajectories of hand pixels BIB029 or optical flow BIB032 . The above approaches extract measurements and features in 2D. In an effort to obtain 3D measurements, Hienz et al. BIB005 proposed a simple geometric model of the hand/arm to estimate the hand's distance to the camera using the shoulder, elbow, and hand's 2D positions. Approaches which directly measure 3D position using multiple cameras provide better accuracy but at the cost of higher computational complexity. Matsuo et al.'s BIB011 stereo camera system found the 3D position of both hands in a body-centered coordinate frame.
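As a concrete illustration of the body-relative center-of-gravity feature described above, a minimal NumPy sketch might look as follows (the function name and toy mask are our own illustrations, not code from any of the cited systems):

```python
import numpy as np

def hand_position_feature(hand_mask, face_centroid):
    """Center of gravity of a binary hand blob, expressed relative to the
    face centroid -- one of the common positional features surveyed above."""
    ys, xs = np.nonzero(hand_mask)           # pixels belonging to the hand blob
    cog = np.array([xs.mean(), ys.mean()])   # center of gravity (x, y) in image coords
    return cog - np.asarray(face_centroid, dtype=float)

# Toy 8x8 mask with a 2x2 "hand" blob; face centroid assumed at (4, 4).
mask = np.zeros((8, 8), dtype=bool)
mask[2:4, 5:7] = True
feat = hand_position_feature(mask, face_centroid=(4.0, 4.0))  # -> [1.5, -1.5]
```

Measuring the centroid relative to the face or body, rather than in absolute image coordinates, makes the feature insensitive to where the signer stands in the frame.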
Vogler and Metaxas' BIB012 orthogonal camera system extracted the 3D wrist position coordinates and orientation parameters relative to the signer's spine. Hand appearance features include: segmented hand images, binary hand silhouettes or hand blobs, and hand contours. Segmented hand images are usually normalized for size, in-plane orientation, and/or illumination ( BIB018 , BIB006 ), and principal component analysis (PCA) is often applied for dimensionality reduction before further processing ( BIB008 , BIB027 , BIB019 , BIB017 ). In Starner et al. BIB013 and Tanibata et al. , geometric moments were calculated from the hand blob. Assan and Grobel BIB009 , Bauer and Kraiss BIB024 , BIB028 , calculated the sizes, distances, and angles between distinctly colored fingers, palm, and back of the hand. Contour-based representations include various translation, scale, and/or in-plane rotation invariant features such as Fourier descriptors (FD) BIB032 , BIB014 , BIB007 , size functions BIB016 , the lengths of vectors from the hand centroid to the fingertips region BIB022 , and localized contour sequences BIB023 . Huang and Jeng BIB025 represented hand contours with Active Shape Models BIB003 , and extracted a modified Hausdorff distance measure between the prestored shape models and the hand contour in the input test image. Bowden and Sahardi used PCA on training hand contours, but constructed nonlinear Point Distribution Models by piecewise linear approximation with clusters. Hand contour tracking was applied on a fingerspelling video sequence, and the model transitioned between clusters with probabilities that reflected information about shape space and alphabet probabilities in English. Though contour-based representations use invariant features, they may generally suffer from ambiguities resulting from different handshapes with similar contours. All of the above methods extracted 2D hand appearance features.
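The translation, scale, and in-plane rotation invariance of contour features such as Fourier descriptors can be sketched with the standard recipe (drop the DC term, take magnitudes, normalize by the first harmonic); this is a generic illustration, not the exact formulation used in any of the cited works:

```python
import numpy as np

def fourier_descriptors(contour, k=8):
    """Invariant Fourier descriptors of a closed contour (N x 2 points).
    Dropping the DC term removes translation, taking magnitudes removes
    rotation and starting point, dividing by the first harmonic removes scale."""
    z = contour[:, 0] + 1j * contour[:, 1]   # boundary as a complex signal
    Z = np.fft.fft(z)
    mags = np.abs(Z[1:k + 1])                # discard the DC (translation) term
    return mags / mags[0]                    # scale normalization

# A circle and a rotated, scaled, translated copy yield the same descriptors.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
c1 = np.stack([np.cos(t), np.sin(t)], axis=1)
c2 = 3.0 * (c1 @ np.array([[0.0, -1.0], [1.0, 0.0]])) + 5.0  # rotate, scale, shift
assert np.allclose(fourier_descriptors(c1), fourier_descriptors(c2), atol=1e-6)
```

As the survey notes, such invariant contour features remain ambiguous whenever different handshapes produce similar silhouettes.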
In contrast, Holden and Owens BIB020 and Dorner BIB002 employed a 3D model-based approach to estimate finger joint angles and 3D hand orientation. In both works, finger joints and wrist were marked with distinct colors, and a 3D hand model was iteratively matched to the image content by comparing the projections of the hand model's joints with the corresponding joint markers detected in the image. Holden and Owens BIB020 could deal with missing markers due to the hand's self-occlusion by Kalman filter prediction. However, hand orientation was restricted to palm facing the camera. Dorner BIB002 estimated the hand model state based on constraints on the possible range of joint angles and state transitions, to successfully track in the presence of out-of-plane rotations. However, processing speed was quite slow, requiring 5-6s per frame. In these and other works using 3D hand models ( , ), the image FOV is assumed to contain only the hand with high resolution. In a sign recognition system, however, the image FOV would contain the entire upper body; hence, the hand size would be small. In addition, these works do not consider situations when the hand is partially occluded (for example, by the other hand). Fillbrandt et al. attempt to address the shortcomings of the above approaches which directly find correspondence between image features and the 3D hand model. They used a network of 2D Active Appearance Models BIB030 as an intermediate representation between image features and a simplified 3D hand model with 9 degrees-of-freedom. Experimental results with high-resolution images of the hand against a uniform background yielded an average error of 10 percent in estimating finger parameters, while error for estimating the 3D hand orientation was 10°-20°. The system ran at 4 fps on a 1GHz Pentium III and they obtained some good results with low resolution images and partly missing image information.
However, further work is needed before the model can be applied to a natural signing environment. In terms of processing speed, methods that operate at near real-time for tracking and/or feature extraction (roughly 4-16 fps) include BIB026 , BIB009 , BIB024 , BIB028 , BIB005 , BIB015 , BIB013 , BIB031 . Some of the other methods were particularly slow, for example: 1.6s per frame (PII-330M) for tracking in Sherrah and Gong BIB021 , several seconds per frame for feature extraction in Tamura and Kawasaki BIB001 , 58.3s per frame (SGI INDIGO 2 workstation) for hand segmentation in Cui and Weng BIB004 , 60s for hand segmentation, and 70s for feature estimation in Huang and Jeng BIB025 . Direct-measure devices use trackers to directly measure the 3D position and orientation of the hand(s), and gloves to measure finger joint angles. More details on feature estimation from direct-measure devices can be found in Appendix C (www.computer.org/publications/dlib).
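As a closing illustration of the Kalman-filter prediction mentioned above for bridging self-occluded markers (as in Holden and Owens BIB020 ), here is a simplified constant-velocity sketch; it is a generic filter of our own, not the implementation of any cited work:

```python
import numpy as np

class MarkerKalman:
    """Constant-velocity Kalman filter for a 2D marker position.
    State is [x, y, vx, vy]; only the position (x, y) is observed,
    so an occluded frame can be bridged by the predict step alone."""
    def __init__(self, q=1e-2, r=1e-1):
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0        # x += vx, y += vy per frame
        self.H = np.eye(2, 4)                    # observe position only
        self.Q = q * np.eye(4)                   # process noise
        self.R = r * np.eye(2)                   # measurement noise
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                        # predicted marker position

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

kf = MarkerKalman()
for t in range(5):                  # marker moving right at 1 px/frame
    kf.predict()
    kf.update([float(t), 0.0])
pred = kf.predict()                 # marker occluded: prediction near (5, 0)
```

The same predict/update structure scales to one filter per joint marker, with the update step simply skipped whenever a marker is not detected.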
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> A gesture recognition method for Japanese sign language is presented. We have developed a posture recognition system using neural networks which could recognize a finger alphabet of 42 symbols. We then developed a gesture recognition system where each gesture specifies a word. Gesture recognition is more difficult than posture recognition because it has to handle dynamic processes. To deal with dynamic processes we use a recurrent neural network. Here, we describe a gesture recognition method which can recognize continuous gestures. We then discuss the results of our research. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> A supervised learning neural network classifier that utilizes fuzzy sets as pattern classes is described. Each fuzzy set is an aggregate (union) of fuzzy set hyperboxes. A fuzzy set hyperbox is an n-dimensional box defined by a min point and a max point with a corresponding membership function. The min-max points are determined using the fuzzy min-max learning algorithm, an expansion-contraction process that can learn nonlinear class boundaries in a single pass through the data and provides the ability to incorporate new and refine existing classes without retraining. The use of a fuzzy set approach to pattern classification inherently provides degree of membership information that is extremely useful in higher level decision making. This paper will describe the relationship between fuzzy sets and pattern classification. It explains the fuzzy min-max classifier neural network implementation, it outlines the learning and recall algorithms, and it provides several examples of operation that demonstrate the strong qualities of this new neural network classifier. Pattern classification is a key element to many engineering solutions.
Sonar, radar, seismic, and diagnostic applications all require the ability to accurately classify a situation. Control, tracking, and prediction systems will often use classifiers to determine input-output relationships. Because of this wide range of applicability, pattern classification has been studied a great deal (13), (15), (19). This paper describes a neural network classifier that creates classes by aggregating several smaller fuzzy sets into a single fuzzy set class. This technique, introduced in (42) as an extension of earlier work (41), can learn pattern classes in a single pass through the data, it can add new pattern classes on the fly, it can refine existing pattern classes as new information is received, and it uses simple operations that allow for quick execution. Fuzzy min-max classification neural networks are built using hyperbox fuzzy sets. A hyperbox defines a region of the n-dimensional pattern space that has patterns with full class membership. A hyperbox is completely defined by its min point and its max point, and a membership function is defined with respect to these hyperbox min-max points. The min-max (hyperbox) membership function combination defines a fuzzy set, hyperbox fuzzy sets are aggregated to form a single fuzzy set class, and the resulting structure fits naturally into a neural network framework; hence this classification system is called a fuzzy min-max classification neural network. Learning in the fuzzy min-max classification neural network is performed by properly placing and adjusting hyperboxes in the pattern space. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> The architecture and learning procedure underlying ANFIS (adaptive-network-based fuzzy inference system) is presented, which is a fuzzy inference system implemented in the framework of adaptive networks. 
By using a hybrid learning procedure, the proposed ANFIS can construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs. In the simulation, the ANFIS architecture is employed to model nonlinear functions, identify nonlinear components on-line in a control system, and predict a chaotic time series, all yielding remarkable results. Comparisons with artificial neural networks and earlier work on fuzzy modeling are listed and discussed. Other extensions of the proposed ANFIS and promising applications to automatic control and signal processing are also suggested. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> The design and evaluation of a two-stage neural network which can recognize isolated ASL signs is given. The input to this network is the hand shape and position data obtained from a DataGlove mounted with a Polhemus sensor. The first level consists of four backpropagation neural networks which can recognize the sign language phonology, namely, the 36 hand shapes, 10 locations, 11 orientations, and 11 hand movements. The recognized phonemes from the beginning, middle, and end of the sign are fed to the second stage which recognizes the actual signs. Both backpropagation and Kohonen's self-organizing neural networks were used to compare the performance and the expandability of the learned vocabulary. In the current work, six signers with differing hand sizes signed 14 signs which included hand shape, position, and motion fragile and triple robust signs. When a backpropagation network was used for the second stage, the results show that the network was able to recognize these signs with an overall accuracy of 86%.
When the second stage was a Kohonen self-organizing network, the network could not only recognize the signs with 84% accuracy, but also expand its learned vocabulary through relabeling. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> We explore recognition implications of understanding gesture communication, having chosen American sign language as an example of a gesture language. An instrumented glove and specially developed software have been used for data collection and labeling. We address the problem of recognizing dynamic signing, i.e. signing performed at natural speed. Two neural network architectures have been used for recognition of different types of finger-spelled sentences. Experimental results are presented suggesting that two features of signing affect recognition accuracy: signing frequency, which to a large extent can be accounted for by training a network on the samples of the respective frequency; and the coarticulation effect, which a network fails to identify. As a possible solution to the coarticulation problem, two post-processing algorithms for temporal segmentation are proposed and experimentally evaluated. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> The sign language is a method of communication for the deaf-mute. Articulated gestures and postures of hands and fingers are commonly used for the sign language. This paper presents a system which recognizes the Korean sign language (KSL) and translates it into normal Korean text. A pair of data-gloves are used as the sensing device for detecting motions of hands and fingers. For efficient recognition of gestures and postures, a technique of efficient classification of motions is proposed and a fuzzy min-max neural network is adopted for on-line pattern recognition.
<s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> We present a system for recognising hand gestures in Sign language. The system works in real-time and uses input from a colour video camera. The user wears different coloured gloves on either hand and colour matching is used to distinguish the hands from each other and from the background. So far the system has been tested in fixed lighting conditions, with the camera a fixed distance from the user. The system is user-dependent. <s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> We describe a video-based analysis system for acquisition and classification of hand-arm motion concerning German sign language. These motions are recorded with a single video camera by use of a modular framegrabber system. Data acquisition as well as motion classification are performed in real-time. A colour coded glove and coloured markers at the elbow and shoulder are used. These markers are segmented from the recorded input images as a first step of image processing. Thereafter, features of these coloured areas are calculated, which are used for determining the 2D positions for each frame and hence the positions of hand and arm. The missing third dimension is derived from a geometric model of the human hand-arm system. The sequence of the position data is converted into a certain representation of motion. Motion is derived from rule-based classification of the performed gesture, which yields a recognition rate of 95%. <s> BIB008 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> A new pattern matching method, the partly-hidden Markov model, is proposed for gesture recognition. The hidden Markov model, which is widely used for time series pattern recognition, can deal with only piecewise stationary stochastic processes.
We solved this problem by introducing the modified second order Markov model, in which the first state is hidden and the second one is observable. As shown by the results of 6 sign-language recognition tests, the error rate was improved by 73% compared with a normal HMM. <s> BIB009 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This work presents a design for a human-computer interface capable of recognizing 25 gestures from the international hand alphabet in real-time. Principal Component Analysis (PCA) is used to extract features from images of gestures. The features represent gesture images in terms of an optimal coordinate system, in which the classes of gestures make up clusters. The system is divided into two parts: an off-line and an on-line part. The feature selection and generation of a classifier is performed off-line. On-line the obtained features and the classifier are used to classify new and unknown gesture images in real-time. Results show that an overall off-line recognition rate averaging 99% on 1500 images is achieved when trained on 1000 other images. The on-line system runs at 14 frames per second. <s> BIB010 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Reducing or eliminating statistical redundancy between the components of high-dimensional vector data enables a lower-dimensional representation without significant loss of information. Recognizing the limitations of principal component analysis (PCA), researchers in the statistics and neural network communities have developed nonlinear extensions of PCA. This article develops a local linear approach to dimension reduction that provides accurate representations and is fast to compute. We exercise the algorithms on speech and image data, and compare performance with PCA and with neural network implementations of nonlinear PCA.
We find that both nonlinear techniques can provide more accurate representations than PCA and show that the local linear techniques outperform neural network implementations. <s> BIB011 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This paper documents the recognition method of deciphering Japanese sign language (JSL) using projected images. The goal of the movement recognition is to foster communication between the hearing impaired and people capable of normal speech. We use a stereo camera for recording three-dimensional movements, an image processing board for tracking movements, and a personal computer for an image processor charting the recognition of JSL patterns. This system works by formalizing the space area of the signers according to the characteristics of the human body, determining components such as location and movements, and then recognizing sign language patterns. The system is able to recognize JSL by determining the extent of similarities in the sign field, and does so even when vibrations in hand movements occur and when there are differences in body build. We obtained useful results from recognition experiments on 38 different JSL signs with two signers. <s> BIB012 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.
<s> BIB013 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This paper presents a sign language recognition system which consists of three modules: model-based hand tracking, feature extraction, and gesture recognition using a 3D Hopfield neural network (HNN). The first one uses the Hausdorff distance measure to track shape-variant hand motion, the second one applies the scale and rotation-invariant Fourier descriptor to characterize hand figures, and the last one performs a graph matching between the input gesture model and the stored models by using a 3D modified HNN to recognize the gesture. Our system tests 15 different hand gestures. The experimental results show that our system can achieve above 91% recognition rate, and the recognition process time is about 10 s. The major contribution in this paper is that we propose a 3D modified HNN for gesture recognition which is more reliable than the conventional methods. <s> BIB014 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon. <s> BIB015 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> A large vocabulary sign language interpreter is presented with real-time continuous gesture recognition of sign language using a data glove. 
Sign language, which is usually known as a natural language with formal semantic definitions and syntactic rules, is a large set of hand gestures that are used daily to communicate with the hearing impaired. The most critical problem, end-point detection in a stream of gesture input, is first solved, and then statistical analysis is done according to four parameters in a gesture: posture, position, orientation, and motion. The authors have implemented a prototype system with a lexicon of 250 vocabulary items and collected 196 training sentences in Taiwanese Sign Language (TWL). This system uses hidden Markov models (HMMs) for 51 fundamental postures, 6 orientations, and 8 motion primitives. In a signer-dependent way, a sentence of gestures based on these vocabularies can be continuously recognized in real-time and the average recognition rate is 80.4%. <s> BIB016 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This paper presents a system for the recognition of sign language based on a theory of shape representation using size functions proposed by P. Frosini [5]. Our system consists of three modules: feature extraction, sign representation and sign recognition. The first performs an edge detection operation, the second uses size functions and inertia moments to represent hand signs, and the last uses a neural network to recognize hand gestures. Sign representation is an important step which we will deal with. Unlike previous work [15, 16], a new approach to the representation of hand gestures is proposed, based on size functions. Each sign is represented by means of a feature vector computed from a new pair of moment-based size functions. The work reported here indicates that moment-based size functions can be effectively used for the recognition of sign language even in the presence of shape changes due to differences in hands, position, style of signing, and viewpoint.
<s> BIB017 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> In this paper, a system designed for helping the deaf to communicate with others is presented. Some useful new ideas are proposed in design and implementation. An algorithm based on geometrical analysis for the purpose of extracting invariant feature to signer position is presented. An ANN–DP combined approach is employed for segmenting subwords automatically from the data stream of sign signals. To tackle the epenthesis movement problem, a DP-based method has been used to obtain the context-dependent models. Some techniques for system implementation are also given, including fast matching, frame prediction and search algorithms. The implemented system is able to recognize continuous large vocabulary Chinese Sign Language. Experiments show that proposed techniques in this paper are efficient on either recognition speed or recognition performance. <s> BIB018 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Gesture based applications widely range from replacing the traditional mouse as a position device to virtual reality and communication with the deaf. The article presents a fuzzy rule based approach to spatio-temporal hand gesture recognition. This approach employs a powerful method based on hyperrectangutar composite neural networks (HRCNNs) for selecting templates. Templates for each hand shape are represented in the form of crisp IF-THEN rules that are extracted from the values of synaptic weights of the corresponding trained HRCNNs. Each crisp IF-THEN rule is then fuzzified by employing a special membership function in order to represent the degree to which a pattern is similar to the corresponding antecedent part. When an unknown gesture is to be classified, each sample of the unknown gesture is tested by each fuzzy rule. 
The accumulated similarity associated with all samples of the input is computed for each hand gesture in the vocabulary, and the unknown gesture is classified as the gesture yielding the highest accumulative similarity. Based on the method we can implement a small-sized dynamic hand gesture recognition system. Two databases which consisted of 90 spatio-temporal hand gestures are utilized for verifying its performance. An encouraging experimental result confirms the effectiveness of the proposed method. <s> BIB019 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> A sign language recognition system is required to use information from both global features, such as hand movement and location, and local features, such as hand shape and orientation. We present an adequate local feature recognizer for a sign language recognition system. Our basic approach is to represent the hand images extracted from sign-language images as symbols which correspond to clusters by a clustering technique. The clusters are created from a training set of extracted hand images so that a similar appearance can be classified into the same cluster on an eigenspace. The experimental results indicate that our system can recognize a sign language word even in two-handed and hand-to-hand contact cases. <s> BIB020 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Abstract In this paper, we present a new approach to recognizing hand signs. In this approach, motion recognition (the hand movement) is tightly coupled with spatial recognition (hand shape). The system uses multiclass, multidimensional discriminant analysis to automatically select the most discriminating linear features for gesture classification. A recursive partition tree approximator is proposed to do classification. 
This approach, combined with our previous work on hand segmentation, forms a new framework that addresses the three key aspects of hand sign interpretation: hand shape, location, and movement. The framework has been tested to recognize 28 different hand signs. The experimental results show that the system achieved a 93.2% recognition rate for test sequences that had not been used in the training phase. It is shown that our approach provides performance better than that of nearest-neighbor classification in the eigensubspace. <s> BIB021 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Since the human hand is highly articulated and deformable, hand posture recognition is a challenging example in the research on view-independent object recognition. Due to the difficulties of the model-based approach, the appearance-based learning approach is promising to handle large variation in visual inputs. However, the generalization of many proposed supervised learning methods to this problem often suffers from the insufficiency of labeled training data. This paper describes an approach to alleviate this difficulty by adding a large unlabeled training set. Combining supervised and unsupervised learning paradigms, a novel and powerful learning approach, the Discriminant-EM (D-EM) algorithm, is proposed in this paper to handle the case of a small labeled training set. Experiments show that D-EM outperforms many other learning methods. Based on this approach, we implement a gesture interface to recognize a set of predefined gesture commands, and it is also extended to hand detection. This algorithm can also be applied to other object recognition tasks.
<s> BIB022 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Sign language is the language used by the deaf, which is a comparatively steadier expressive system composed of signs corresponding to postures and motions assisted by facial expression. The objective of sign language recognition research is to "see" the language of deaf. The integration of sign language recognition and sign language synthesis jointly comprise a "human-computer sign language interpreter", which facilitates the interaction between deaf and their surroundings. Considering the speed and performance of the recognition system, Cyberglove is selected as gesture input device in our sign language recognition system, Semi-Continuous Dynamic Gaussian Mixture Model (SCDGMM) is used as recognition technique, and a search scheme based on relative entropy is proposed and is applied to SCDGMM-based sign word recognition. Comparing with SCDGMM recognizer without searching scheme, the recognition time of SCDGMM recognizer with searching scheme reduces almost 15 times. <s> BIB023 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> In this paper 3-layer feedforward network is introduced to recognize Chinese manual alphabet, and Single Parameter Dynamic Search Algorithm(SPDS) is used to learn net parameters. In addition, a recognition algorithm for recognizing manual alphabets based on multi-features and multi-classifiers is proposed to promote the recognition performance of finger-spelling. From experiment result, it is shown that Chinese finger-spelling recognition based on multi-features and multi-classifiers outperforms its recognition based on single-classifier. 
<s> BIB024 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Hand gestures play an important role in communication between people during their daily lives. But the extensive use of hand gestures as a mean of communication can be found in sign languages. Sign language is the basic communication method between deaf people. A translator is usually needed when an ordinary person wants to communicate with a deaf one. The work presented in this paper aims at developing a system for automatic translation of gestures of the manual alphabets in the Arabic sign language. In doing so, we have designed a collection of ANFIS networks, each of which is trained to recognize one gesture. Our system does not rely on using any gloves or visual markings to accomplish the recognition job. Instead, it deals with images of bare hands, which allows the user to interact with the system in a natural way. An image of the hand gesture is processed and converted into a set of features that comprises of the lengths of some vectors which are selected to span the fingertips' region. The extracted features are rotation, scale, and translation invariat, which makes the system more flexible. The subtractive clustering algorithm and the least-squares estimator are used to identify the fuzzy inference system, and the training is achieved using the hybrid learning algorithm. Experiments revealed that our system was able to recognize the 30 Arabic manual alphabets with an accuracy of 93.55%. <s> BIB025 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This paper is concerned with the automatic recognition of German continuous sign language. For the most user-friendliness only one single color video camera is used for image recording. The statistical approach is based on the Bayes decision rule for minimum error rate. 
Following speech recognition system designs, which are in general based on subunits, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs will be outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits called fenones. The k-means algorithm is used for the definition of such fenones. The software prototype of the system is currently being evaluated in experiments. <s> BIB026 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> A divide-and-conquer approach is presented for signer-independent continuous Chinese Sign Language (CSL) recognition in this paper. The problem of continuous CSL recognition is divided into the subproblems of isolated CSL recognition. The simple recurrent network (SRN) and the hidden Markov models (HMM) are combined in this approach. The improved SRN is introduced for segmentation of continuous CSL. Outputs of the SRN are regarded as the states of the HMM, and the Lattice Viterbi algorithm is employed to search the best word sequence in the HMM framework. Experimental results show the SRN/HMM approach has better performance than the standard HMM one.
The algorithms in the subsystems are formulated or selected to optimally classify hand gestures. The gray-scale image of a hand gesture is segmented using a histogram thresholding algorithm. A morphological filtering approach is designed to effectively remove background and object noise in the segmented image. The contour of a gesture is represented by a localized contour sequence whose samples are the perpendicular distances between the contour pixels and the chord connecting the end-points of a window centered on the contour pixels. Gesture similarity is determined by measuring the similarity between the localized contour sequences of the gestures. Linear alignment and nonlinear alignment are developed to measure the similarity between the localized contour sequences. Experiments and evaluations on a subset of American Sign Language (ASL) hand gestures show that, by using nonlinear alignment, no gestures are misclassified by the system. Additionally, it is also estimated that real-time gesture classification is possible through the use of a high-speed PC, high-speed digital signal processing chips and code optimization. <s> BIB028 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> We present an algorithm for extracting and classifying two-dimensional motion in an image sequence based on motion trajectories. First, a multiscale segmentation is performed to generate homogeneous regions in each frame. Regions between consecutive frames are then matched to obtain two-view correspondences. Affine transformations are computed from each pair of corresponding regions to define pixel matches. Pixel matches over consecutive image pairs are concatenated to obtain pixel-level motion trajectories across the image sequence. Motion patterns are learned from the extracted trajectories using a time-delay neural network. We apply the proposed method to recognize 40 hand gestures of American Sign Language.
Experimental results show that motion patterns of hand gestures can be extracted and recognized accurately using motion trajectories. <s> BIB029 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This paper deals with the automatic recognition of German signs. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system designs, which are in general based on phonemes, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs is outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected, as new signs can be added to the vocabulary database without re-training the existing hidden Markov models (HMMs) for subunits. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits. In first experiences a recognition accuracy of 92,5% was achieved for 100 signs, which were previously trained. For 50 new signs an accuracy of 81% was achieved without retraining of subunit-HMMs. <s> BIB030 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Hitherto, the major challenge to sign language recognition is how to develop approaches that scale well with increasing vocabulary size. We present an approach to large vocabulary, continuous Chinese sign language (CSL) recognition that uses phonemes instead of whole signs as the basic units. Since the number of phonemes is limited, HMM-based training and recognition of the CSL signal becomes more tractable and has the potential to recognize enlarged vocabularies. Furthermore, the proposed method facilitates the CSL recognition when the finger-alphabet is blended with gestures. About 2400 phonemes are defined for CSL. 
One HMM is built for each phoneme, and then the signs are encoded based on these phonemes. A decoder that uses a tree-structured network is presented. Clustering of the Gaussians on the states, the language model and N-best-pass is used to improve the performance of the system. Experiments on a 5119 sign vocabulary are carried out, and the result is exciting. <s> BIB031 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> A new method to recognize continuous sign language based on hidden Markov model is proposed. According to the dependence of linguistic context, connections between elementary subwords are classified as strong connection and weak connection. The recognition of strong connection is accomplished with the aid of subword trees, which describe the connection of subwords in each sign language word. In weak connection, the main problem is how to extract the best matched subwords and find their end-points with little help of context information. The proposed method improves the summing process of the Viterbi decoding algorithm which is constrained in every individual model, and compares the end score at each frame to find the ending frame of a subword. Experimental results show an accuracy of 70% for continuous sign sentences that comprise no more than 4 subwords. <s> BIB032 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Principle Component Analysis (PCA) and Multiple Discriminant Analysis (MDA) have long been used for the appearance-based hand posture recognition. In this paper, we propose a novel PCA/MDA scheme for hand posture recognition. Unlike other PCA/MDA schemes, the PCA layer acts as a crude classification. Since posture alone cannot provide sufficient discriminating information, each input pattern will be given a likelihood of being in the nodes of PCA layers, instead of a strict division. 
Based on the Expectation-Maximization (EM) algorithm, we introduce three methods to estimate the parameters for this crude classification during training. The experiments on a 110-sign vocabulary show a significant improvement compared with the global PCA/MDA. <s> BIB033 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> The paper presents a portable system and method for recognizing the 26 hand shapes of the American Sign Language alphabet, using a novel glove-like device. Two additional signs, 'space', and 'enter' are added to the alphabet to allow the user to form words or phrases and send them to a speech synthesizer. Since the hand shape for a letter varies from one signer to another, this is a 28-class pattern recognition system. A three-level hierarchical classifier divides the problem into "dispatchers" and "recognizers." After reducing pattern dimension from ten to three, the projection of class distributions onto horizontal planes makes it possible to apply simple linear discrimination in 2D, and Bayes' Rule in those cases where classes had features with overlapped distributions. Twenty-one out of 26 letters were recognized with 100% accuracy; the worst case, letter U, achieved 78%. <s> BIB034 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> Inspired by the Defense Advanced Research Projects Agency's (DARPA) previous successes in speech recognition, we introduce a new task for sign language recognition research: a mobile one-way American sign language translator. We argue that such a device should be feasible in the next few years, may provide immediate practical benefits for the deaf community, and leads to a sustainable program of research comparable to early speech recognition efforts. 
We ground our efforts in a particular scenario, that of a deaf individual seeking an apartment, and discuss the system requirements and our interface for this scenario. Finally, we describe initial recognition results of 94% accuracy on a 141-sign vocabulary signed in phrases of four signs using a one-handed glove-based system and hidden Markov models (HMMs). <s> BIB035 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Classification Methods <s> This work presents a hierarchical approach to recognizing isolated 3-D hand gesture trajectories for signing exact English (SEE). SEE hand gestures can be periodic as well as non-periodic. We first differentiate between periodic and non-periodic gestures, followed by recognition of individual gestures. After periodicity detection, non-periodic trajectories are classified into 8 classes and periodic trajectories are classified into 4 classes. A Polhemus tracker is used to provide the input data. Periodicity detection is based on Fourier analysis and hand trajectories are recognized by vector quantization principal component analysis (VQPCA). The average periodicity detection accuracy is 95.9%. The average recognition rates with VQPCA for non-periodic and periodic gestures are 97.3% and 97.0%, respectively. In comparison, k-means clustering yielded 87.0% and 85.1%, respectively.
Recognition rates of independent modules reached up to 100% for 42 postures, orientations, 11 locations and 7 movements using linear classification. The overall sign recognizer was tested using a subset of the American Sign Language dictionary comprised by 30 one-handed signs, achieving 98% accuracy. The system proved to be scalable: when the lexicon was extended to 176 signs and tested without retraining, the accuracy was 95%. This represents an improvement over classification based on hidden Markov models (HMMs) and neural networks (NNs). <s> BIB037
Neural Networks and Variants. Multilayer perceptrons (MLP) are often employed for classifying handshape ( BIB005 , BIB018 , BIB017 , BIB001 , BIB013 , BIB004 , BIB024 ). Waldron and Kim BIB004 , and Vamplew and Adams BIB013 additionally used MLPs to classify the hand location, orientation, and movement type from tracker data (see Fig. 5a ). Other neural network (NN) variants include: Fuzzy Min-Max NNs ( BIB002 ) in BIB006 , Adaptive Neuro-Fuzzy Inference System Networks ( BIB003 ) in BIB025 , and Hyperrectangular Composite NNs in BIB019 , all for handshape classification; and 3D Hopfield NN in BIB014 for sign classification. Time-series data, such as movement trajectories and sign gestures, consist of many data points and have variable temporal lengths. NNs designed for classifying static data often do not utilize all the information available in the data points. For example, in classifying movement type, BIB004 used the displacement vectors at the start and midpoint of a gesture as input to the MLP, while BIB013 used only the accumulated displacement in each of the three primary axes of the tracker. Yang et al. BIB029 used Time-Delay NNs which were designed for temporal processing, to classify signs from hand pixel motion trajectories. As a small moving window of gesture data from consecutive time frames is used as input, only a small number of weights need to be trained (in contrast, HMMs often require estimation of many model parameters). The input data window eventually covers all the data points in the sequence, but a standard temporal length is still required. Murakami and Taguchi BIB001 used Recurrent NNs which can take into account temporal context without requiring a fixed temporal length. They considered a sign word to be recognized when the output node values remain unchanged over a heuristically determined period of time. Hidden Markov models (HMMs) and variants. 
Several works classify sign gestures using HMMs, which are widely used in continuous speech recognition. HMMs are able to process time-series data with variable temporal lengths and discount timing variations through the use of skipped states and same-state transitions. HMMs can also implicitly segment continuous speech into individual words: trained word or phoneme HMMs are chained together into a branching tree-structured network, and Viterbi decoding is used to find the most probable path through the network, thereby recovering both the word boundaries and the sequence. This idea has also been used for recognition of continuous signs, using various techniques to increase computational efficiency (some of which originated in speech recognition research). These techniques include language modeling, beam search, and network pruning ( BIB026 , BIB030 , BIB018 , BIB031 ), N-best pass ( BIB031 ), fast matching ( BIB018 ), frame predicting ( BIB018 ), and clustering of Gaussians ( BIB031 ). Language models that have been used include unigram and bigram models in BIB018 , , BIB031 , as well as a strongly constrained parts-of-speech grammar in BIB035 , BIB015 . As an alternative to the tree-structured network approach, Liang and Ouhyoung BIB016 and Fang et al. BIB027 explicitly segmented sentences before classification by HMMs (Section 3.4.1). To reduce training data and enable scaling to large vocabularies, some researchers define sequential subunits, similar to phonetic acoustic models in speech, making every sign a concatenation of HMMs that model subunits. Based on an unsupervised method similar to one employed in speech recognition ( ), Bauer and Kraiss BIB026 defined 10 subunits for a vocabulary of 12 signs using k-means clustering. Later, a bootstrap method BIB030 was introduced to get initial estimates for subunit HMM parameters and obtain the sign transcriptions. Recognition accuracy on 100 isolated signs using 150 HMM subunits was 92.5 percent.
Encouragingly, recognition accuracy of 50 new signs without retraining the subunit HMMs was 81.0 percent. Vogler (Fig. 6a), Yuan et al. BIB032 , and Wang et al. BIB031 defined subunits linguistically instead of using unsupervised learning. BIB031 achieved 86.2 percent word accuracy in continuous sign recognition for a large vocabulary of 5,119 signs with 2,439 subunit HMMs. Fig. 6b ( BIB031 ) shows a tree structure built from these subunits to form sign words. Kobayashi and Haruyama BIB009 argue that HMMs, which are meant to model piecewise stationary processes, are ill-suited for modeling gesture features, which are always transient, and propose the Partly hidden Markov model. Here, the observation node probability is dependent on two states, one hidden and the other observable. Experimental results for isolated sign recognition showed a 73 percent improvement in error rate over HMMs. However, the vocabulary set of six Japanese Sign Language (JSL) signs is too small to draw concrete conclusions. Principal Component Analysis (PCA) and Multiple Discriminant Analysis (MDA). Birk et al. BIB010 and Imagawa et al. BIB020 both reduced dimensionality of segmented hand images by PCA before classification. Imagawa et al. BIB020 applied an unsupervised approach where training images were clustered in eigenspace and test images were classified to the cluster identity that gave the maximum likelihood score. Kong and Ranganath BIB036 classified 11 3D movement trajectories by performing periodicity detection using Fourier analysis, followed by Vector Quantization Principal Component Analysis BIB011 . Cui and Weng BIB021 used a recursive partition tree and applied PCA and MDA operations at each node. This method was able to achieve nonlinear classification boundaries in the feature space of 28 ASL signs. Deng and Tsui BIB033 found that when the entire data set is used for MDA, the performance degrades with an increasing number of classes.
To overcome this and to avoid strict division of data into partitions (as in BIB021 ), they applied PCA and then performed crude classification into clusters with Gaussian distributions before applying MDA locally. The final classification of an input vector into one of 110 ASL signs took into account the likelihood of being in each of the clusters. Wu and Huang BIB022 aimed to overcome the difficulty of getting good results from MDA without a large labeled training data set. A small labeled data set and a large unlabeled data set were both modeled by the same mixture density, and a modified Discriminant-EM algorithm was used to estimate the mixture density parameters. A classifier trained with 10,000 unlabeled samples and 140 labeled samples of segmented hand images classified 14 handshapes with 92.4 percent accuracy, including test images where the hands had significant out-of-plane rotations. The above works mainly dealt with handshape classification ( BIB010 , BIB022 ) or classification of signs based on just the beginning and ending handshape ( BIB033 , BIB020 ). In BIB036 and BIB021 , which classified movement trajectory and signs, respectively, mapping to a fixed temporal length was required. Other methods. Some of the other methods that have been applied for classification of handshape are: decision trees ( BIB034 , BIB037 ), nearest-neighbor matching ( ), image template matching ( BIB028 , BIB007 ), and correlation with phase-only filters from discrete Fourier transforms ( ). Rule-based methods based on dictionary entries or decision trees have also been applied to classifying motion trajectories or signs ( BIB008 , , , BIB006 , BIB012 , ). Classification is by template matching with the ideal sequence of motion directions, or by finding features (like concavity or change in direction) that characterize each motion type. The rules are usually hand-coded and, thus, may not generalize well.
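Rule-based trajectory classification of the kind described above can be illustrated with a minimal sketch that quantizes a 2D hand trajectory into a chain code of movement directions and matches it against ideal direction templates. The direction templates and labels below are hypothetical examples for illustration, not taken from any of the cited systems.

```python
# Minimal sketch of rule-based motion-trajectory classification by
# template matching on quantized movement directions (chain codes).
# The templates and labels below are hypothetical illustrations.
import math

def quantize_direction(dx, dy, bins=8):
    """Map a displacement vector to one of `bins` direction codes (0 = +x)."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int(round(angle / (2 * math.pi / bins))) % bins

def chain_code(trajectory):
    """Convert an (x, y) trajectory into a run-length-collapsed chain code."""
    codes = []
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        c = quantize_direction(x1 - x0, y1 - y0)
        if not codes or codes[-1] != c:  # collapse repeated directions
            codes.append(c)
    return codes

# Hypothetical motion-type templates: ideal direction code sequences.
TEMPLATES = {
    "rightward":  [0],           # single move along +x
    "up_down":    [2, 6],        # up (+y) then down (-y)
    "circle_ccw": [0, 2, 4, 6],  # right, up, left, down
}

def classify(trajectory):
    """Return the label of the matching template, or None if nothing matches."""
    codes = chain_code(trajectory)
    for label, template in TEMPLATES.items():
        if codes == template:
            return label
    return None
```

Because the rules are exact sequence matches, any timing variation is absorbed by the run-length collapse, but unmodeled direction noise causes a miss, which reflects the survey's observation that hand-coded rules may not generalize well.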
Wu and Gao BIB023 presented the Semicontinuous Dynamic Gaussian Mixture Model as an alternative to HMMs for processing temporal data, with the advantages of faster training time and fewer model parameters. This model was applied to recognizing sign words from a vocabulary of 274 signs, using only finger joint angle data (from two CyberGloves). They achieved fast recognition (0.04 s per sign) and 97.4 percent accuracy.
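The core idea of density-based sign classification over glove features can be shown with a much-simplified stand-in: each sign class is modeled by a single diagonal-covariance Gaussian (rather than the semicontinuous dynamic mixture of BIB023 ), and an input is assigned to the class with the highest log-likelihood. The sign labels and joint-angle statistics are invented for the sketch.

```python
import numpy as np

def fit_gaussian(X):
    """Fit a diagonal-covariance Gaussian to the rows of X."""
    return X.mean(axis=0), X.var(axis=0) + 1e-6  # variance floor for stability

def log_likelihood(x, mean, var):
    """Log-density of x under a diagonal Gaussian (up to no omitted terms)."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

def classify(x, models):
    """Maximum-likelihood class assignment."""
    scores = {label: log_likelihood(x, m, v) for label, (m, v) in models.items()}
    return max(scores, key=scores.get)

# Toy finger-joint-angle feature vectors (18 dims) for two sign classes.
rng = np.random.default_rng(1)
train = {
    "sign_a": rng.normal(0.2, 0.05, size=(40, 18)),
    "sign_b": rng.normal(0.8, 0.05, size=(40, 18)),
}
models = {label: fit_gaussian(X) for label, X in train.items()}
pred = classify(rng.normal(0.8, 0.05, size=18), models)
```

A mixture model generalizes this by scoring each class with a weighted sum of several such Gaussians, which is what buys the claimed speed advantage over full HMM decoding.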
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning

Schemes for Integrating Component-Level Results
A common approach is to hand-code the categories of handshape, hand orientation, hand location, and movement type that make up each sign in the vocabulary, forming a lexicon of sign definitions. Classifying the sign label from component-level results is then performed by comparing the ideal lexicon categories with the corresponding recognized components ( BIB012 , BIB007 , BIB004 , BIB008 , BIB009 , BIB001 , BIB005 ). Various methods of performing this matching operation have been implemented; for example, Vamplew and Adams BIB005 employed a nearest-neighbor algorithm with a heuristic distance measure for matching sign word candidates. In Sagawa and Takeuchi BIB008 , the dictionary entries defined the mean and variance (which were learned from training examples) of handshape, orientation, and motion type attributes, as well as the degree of overlap in the timing of these components. Candidate sign words were then given a probability score based on the actual values of the component attributes in the input gesture data. In Su BIB009 , scoring was based on an accumulated similarity measure of input handshape data from the first and last 10 sample vectors of a gesture. A major assumption was that signs can be distinguished based on just the starting and ending handshapes.

Liang and Ouhyoung BIB006 classified all four gesture components using HMMs. Classification at the sign and sentence level was then accomplished using dynamic programming, taking into account the probability of the handshape, location, orientation, and movement components according to dictionary definitions, as well as unigram and bigram probabilities of the sign gestures. Methods based on HMMs include Gao et al. BIB010 , where HMMs model individual sign words while observations of the HMM states correspond to component-level labels for position, orientation, and handshape, which were classified by multilayer perceptrons (MLPs).
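Lexicon-based matching of recognized components to sign definitions can be sketched in a few lines. The sign names, component categories, and the match-count score below are all hypothetical; real systems such as BIB005 used weighted heuristic distances rather than a plain count.

```python
# Hypothetical lexicon: each sign is defined by its component categories.
LEXICON = {
    "THANK":  {"handshape": "B", "location": "chin",    "movement": "arc"},
    "PLEASE": {"handshape": "B", "location": "chest",   "movement": "circle"},
    "NAME":   {"handshape": "H", "location": "neutral", "movement": "tap"},
}

def match_sign(recognized, lexicon):
    """Score each lexicon entry by the number of matching components
    and return the best-scoring sign label."""
    def score(entry):
        return sum(recognized.get(k) == v for k, v in entry.items())
    best = max(lexicon, key=lambda s: score(lexicon[s]))
    return best, score(lexicon[best])

sign, hits = match_sign(
    {"handshape": "B", "location": "chest", "movement": "circle"}, LEXICON)
```

Note that extending the vocabulary only requires adding entries to `LEXICON`; the component-level classifiers that produce the `recognized` dictionary are untouched, which is the scalability argument made later in this section.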
Vogler proposed the Parallel HMM algorithm to model gesture components and recognize continuous signing in sentences. The right hand's shape, movement, and location, along with the left hand's movement and location, were represented by separate HMM channels which were trained with relevant data and features. For recognition, individual HMM networks were built in each channel and a modified Viterbi decoding algorithm searched through all the networks in parallel. Path probabilities from each network that went through the same sequence of words were combined (Fig. 5b). Tanibata et al. proposed a similar scheme where output probabilities from HMMs which model the right and left hand's gesture data were multiplied together for isolated word recognition.

Waldron and Kim BIB003 combined component-level results (from handshape, hand location, orientation, and movement type classification) with NNs, experimenting with MLPs as well as Kohonen self-organizing maps. The self-organizing map performed slightly worse than the MLP (83 percent versus 86 percent sign recognition accuracy), but it was possible to relabel the map to recognize new signs without requiring additional training data (experimental results were given for relabeling to accommodate two new signs). In an adaptive fuzzy expert system ( BIB002 ) by Holden and Owens BIB011 , signs were classified based on start and end handshapes and finger motion, using triangular fuzzy membership functions whose parameters were found from training data.

An advantage of decoupling component-level and sign-level classification is that fewer classes need to be distinguished at the component level. This conforms with the findings of sign linguists that there are a small, limited number of categories in each of the gesture components, which can be combined to form a large number of sign words.
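The channel-combination step shared by the Parallel HMM and Tanibata et al.'s scheme reduces, under a channel-independence assumption, to summing per-channel log-likelihoods (equivalently, multiplying probabilities). The sketch below uses hypothetical sign labels and per-channel scores; in the real systems these scores come from Viterbi decoding of each channel's HMM network.

```python
import numpy as np

# Hypothetical per-channel log-likelihoods for three candidate signs,
# as if produced by independent HMMs for handshape, movement, location.
channel_scores = {
    "handshape": np.log([0.6, 0.3, 0.1]),
    "movement":  np.log([0.2, 0.7, 0.1]),
    "location":  np.log([0.3, 0.5, 0.2]),
}
signs = ["GIVE", "HELP", "STOP"]

# Channels assumed independent: joint log-probability is the sum over channels.
combined = sum(channel_scores.values())
best = signs[int(np.argmax(combined))]
```

Note that the channel that is individually most confident ("handshape" favors the first sign) can be outvoted once the evidence from all channels is pooled.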
For example, in Liang and Ouhyoung BIB006 , the largest number of classes at the component level was 51 categories (for handshape), which is smaller than the 71 to 250 sign words that were recognized. Though some of these works may have small vocabularies (e.g., 22 signs in ), their focus, nevertheless, is on developing frameworks that allow scaling to large vocabularies. In general, this approach enables the component-level classifiers to be simpler, with fewer parameters to be learned, due to the smaller number of classes to be distinguished and the reduced input dimensions (since only the relevant component features are input to each classifier). In the works where sign-level classification was based on a lexicon of sign definitions, only training data for component-level classification was required, and none at the whole-sign level ( BIB012 , BIB004 , BIB006 , BIB009 , BIB001 , BIB005 ). Furthermore, new signs can be recognized without retraining the component-level classifiers, if they cover all categories of components that may appear in signs. For example, the system in Hernandez-Rebollar et al. BIB012 , trained to classify 30 signs, can be expanded to classify 176 new signs by just adding their descriptions into the lexicon.

In addition, approaches that do not require any training at the sign level may be the most suitable for dealing with inflections and other grammatical processes in signing. As described in Section 2.2 and Appendix A (which can be found at www.computer.org/publications/dlib), the citation form of a sign can be systematically modified in one or more of its components to result in an inflected or derived sign form. This increases the vocabulary size to many times the number of lexical signs, with a correspondingly increased data requirement if training is required at the sign level.
However, there is a limited number of ways in which these grammatical processes occur; hence, much less training data would be required if these processes could be recognized at the component level.
Main Issues in the Classification of Sign Gestures
The success of the works reported in the literature should not be measured just in terms of recognition rate, but also in terms of how well they deal with the main issues involved in the classification of sign gestures. In the following, we consider issues which apply to both vision-based and direct-measure device approaches. For a discussion of imaging environment constraints and restrictions, and of feature estimation issues pertaining to vision-based approaches, the reader is referred to Sections 3.1 and 3.2.

Tables 2 and 3 reveal that most of the works deal with isolated sign recognition, where the user either performs the signs one at a time, starting and ending at a neutral position, or with exaggerated pauses, or while applying an external switch between each word. Extending isolated recognition to continuous signing requires automatic detection of word boundaries so that the recognition algorithm can be applied to the segmented signs. As such, valid sign segments, where the movement trajectory, handshape, and orientation are meaningful parts of the sign, need to be distinguished from movement epenthesis segments, where the hand(s) are merely transiting from the ending location and hand configuration of one sign to the start of the next sign.

The general approach for explicit segmentation uses a subset of features from the gesture data as cues for boundary detection. Sagawa and Takeuchi BIB005 considered a minimum in the hand velocity, a minimum in the differential of glove finger flexure values, and a large change in motion trajectory angle as candidate points for word boundaries. Transition periods and valid word segments were further distinguished by calculating the ratio between the minimum acceleration value and the maximum velocity in the segment: a minimal ratio indicated a word, otherwise a transition. In experiments with 100 JSL sentences, 80.2 percent of the word segments were correctly detected, while 11.2 percent of the transition segments were misjudged as words.
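A velocity-minimum boundary cue of the kind just described can be sketched as follows; the speed profile and threshold are invented, and real systems combine several such cues (finger flexure differentials, trajectory angle changes) rather than speed alone.

```python
import numpy as np

def boundary_candidates(speed, threshold):
    """Return frame indices where hand speed is a local minimum
    that also falls below the given threshold."""
    s = np.asarray(speed, dtype=float)
    idx = []
    for t in range(1, len(s) - 1):
        if s[t] < threshold and s[t] <= s[t - 1] and s[t] <= s[t + 1]:
            idx.append(t)
    return idx

# Synthetic speed profile: two "words" separated by a near-still transition.
speed = [0.1, 0.8, 0.9, 0.7, 0.05, 0.6, 0.85, 0.75, 0.1]
cands = boundary_candidates(speed, threshold=0.2)
```

The weakness noted in the text is visible here: any sign whose internal movement briefly slows below the threshold would also generate a spurious boundary candidate.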
In contrast, Liang and Ouhyoung BIB004 considered a sign gesture as consisting of a sequence of handshapes connected by motion, and assumed that valid sign words are contained in segments where the time-varying parameters in finger flexure data dropped below a threshold. The handshape, orientation, location, and movement type in these segments were classified, while sections with large finger movement were ignored. The limitation of these methods, which use a few gesture features as cues, arises from the difficulty of specifying rules for determining sign boundaries that would apply in all instances. For example, BIB005 assumed that sign words are contained in segments where there is significant hand displacement and finger movement, while boundary points are characterized by a low value in those parameters. However, in general, this may not always occur at sign boundaries. On the other hand, the method in BIB004 might miss important data for signs that involve a change in handshape co-occurring with a meaningful movement trajectory.

A promising approach was proposed in Fang et al. BIB006 , where the appropriate features for segmentation cues were automatically learned by a self-organizing map from finger flexure and tracker position data. The self-organizing map output was input to a recurrent NN, which processed data in temporal context to label data frames as the left boundary, right boundary, or interior of a segment with 98.8 percent accuracy. Transient frames near segment boundaries were assumed to be movement epenthesis and ignored.

A few researchers considered segmentation in fingerspelling sequences, where the task is to mark points where valid handshapes occur. Kramer and Leifer and Wu and Gao BIB007 performed handshape recognition during segments where there was a drop in the velocity of glove finger flexure data. Erenshteyn et al.
BIB001 extracted segments by low-pass filtering and derivative analysis, and discarded transitions and redundant frames by performing recognition only at the midpoint of these segments. Segmentation accuracy was 88-92 percent. Harling and Edwards BIB002 used the sum of finger tension values as a cue: a maximum indicated a valid handshape, while a minimum indicated a transition. The finger tension values were calculated as a function of finger-bend values. Birk et al. BIB003 recognized fingerspelling from image sequences and used frame differencing to discard image frames with large motion.
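The hand-tension cue of Harling and Edwards can be sketched as below. As a simplification, tension is taken directly as the sum of finger-bend values (the original derives tension from the bend values through a hand model), and the three frames of five-finger bend data are invented for the example.

```python
def tension(bends):
    """Crude hand-tension measure: the sum of finger-bend values.
    (A simplification of the tension model in BIB002.)"""
    return sum(bends)

def handshape_frames(frames, min_tension):
    """Return indices of frames whose tension is a local maximum above
    min_tension; these are taken as fully formed handshapes, while
    tension minima are treated as transitions and skipped."""
    t = [tension(f) for f in frames]
    keep = []
    for i in range(1, len(t) - 1):
        if t[i] >= t[i - 1] and t[i] >= t[i + 1] and t[i] > min_tension:
            keep.append(i)
    return keep

frames = [
    [0.1, 0.1, 0.1, 0.1, 0.1],   # relaxed hand in transition
    [0.9, 0.8, 0.9, 0.7, 0.8],   # tense, fully formed handshape
    [0.3, 0.2, 0.3, 0.2, 0.2],   # relaxing again
]
peaks = handshape_frames(frames, min_tension=2.0)
```

Handshape recognition would then run only on the selected peak frames, mirroring the midpoint-only recognition strategy of BIB001 .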
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Continuous Signing in Sentences <s> This paper is concerned with the video-based recognition of signs. Concentrating on the manual parameters of sign language, the system aims for the signer dependent recognition of 262 different signs taken from Sign Language of the Netherlands. For Hidden Markov Modelling a sign is considered a doubly stochastic process, represented by an unobservable state sequence. The observations emitted by the states are regarded as feature vectors, that are extracted from video frames. This work deals with three topics: Firstly the recognition of isolated signs, secondly the influence of variations of the feature vector on the recognition rate and thirdly an approach for the recognition of connected signs. The system achieves recognition rates up to 94% for isolated signs and 73% for a reduced vocabulary of connected signs. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Continuous Signing in Sentences <s> We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Continuous Signing in Sentences <s> In this paper, a system designed for helping the deaf to communicate with others is presented. Some useful new ideas are proposed in design and implementation. An algorithm based on geometrical analysis for the purpose of extracting invariant feature to signer position is presented. 
An ANN–DP combined approach is employed for segmenting subwords automatically from the data stream of sign signals. To tackle the epenthesis movement problem, a DP-based method has been used to obtain the context-dependent models. Some techniques for system implementation are also given, including fast matching, frame prediction and search algorithms. The implemented system is able to recognize continuous large vocabulary Chinese Sign Language. Experiments show that proposed techniques in this paper are efficient on either recognition speed or recognition performance. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Continuous Signing in Sentences <s> This paper is concerned with the automatic recognition of German continuous sign language. For the most user-friendliness only one single color video camera is used for image recording. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system design, which are in general based on subunits, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs will be outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits called fenone. K-means algorithm is used for the definition of such fenones. The software prototype of the system is currently evaluated in experiments. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Continuous Signing in Sentences <s> Hitherto, the major challenge to sign language recognition is how to develop approaches that scale well with increasing vocabulary size. 
We present an approach to large vocabulary, continuous Chinese sign language (CSL) recognition that uses phonemes instead of whole signs as the basic units. Since the number of phonemes is limited, HMM-based training and recognition of the CSL signal becomes more tractable and has the potential to recognize enlarged vocabularies. Furthermore, the proposed method facilitates the CSL recognition when the finger-alphabet is blended with gestures. About 2400 phonemes are defined for CSL. One HMM is built for each phoneme, and then the signs are encoded based on these phonemes. A decoder that uses a tree-structured network is presented. Clustering of the Gaussians on the states, the language model and N-best-pass is used to improve the performance of the system. Experiments on a 5119 sign vocabulary are carried out, and the result is exciting. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Continuous Signing in Sentences <s> A new method to recognize continuous sign language based on hidden Markov model is proposed. According to the dependence of linguistic context, connections between elementary subwords are classified as strong connection and weak connection. The recognition of strong connection is accomplished with the aid of subword trees, which describe the connection of subwords in each sign language word. In weak connection, the main problem is how to extract the best matched subwords and find their end-points with little help of context information. The proposed method improves the summing process of the Viterbi decoding algorithm which is constrained in every individual model, and compares the end score at each frame to find the ending frame of a subword. Experimental results show an accuracy of 70% for continuous sign sentences that comprise no more than 4 subwords. 
<s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Continuous Signing in Sentences <s> The major challenges that sign language recognition (SLR) now faces are developing methods that solve large vocabulary continuous sign problems. In this paper, large vocabulary continuous SLR based on transition movement models is proposed. The proposed method employs the temporal clustering algorithm to cluster a large amount of transition movements, and then the corresponding training algorithm is also presented for automatically segmenting and training these transition movement models. The clustered models can improve the generalization of transition movement models, and are very suitable for large vocabulary continuous SLR. At last, the estimated transition movement models, together with sign models, are viewed as candidate models of the Viterbi search algorithm for recognizing continuous sign language. Experiments show that continuous SLR based on transition movement models has good performance over a large vocabulary of 5113 signs. <s> BIB007
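The temporal clustering of transition movements described in the abstract above can be illustrated with a small sketch: each inter-sign transition is summarized as a fixed-length descriptor, and plain k-means groups similar transitions so that one shared transition model could be trained per cluster rather than per sign pair. The descriptors and the cluster count below are invented toy values, not data or code from the cited system.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: returns (centroids, labels) for a list of equal-length
    feature vectors (e.g., descriptors of inter-sign transition movements)."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: attach each descriptor to its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return centroids, labels

# Toy 2-D descriptors: two visually distinct families of transition movements.
descriptors = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
               (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
_, cluster_of = kmeans(descriptors, k=2)
```

In a real system the descriptors would come from tracked hand features, and each resulting cluster would seed the training of one shared transition HMM.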
A popular approach for dealing with continuous signs without explicit segmentation as above is to use HMMs for implicit sentence segmentation (as mentioned in Section 3.3.1). In continuous speech recognition, coarticulation effects due to neighboring phonemes predominantly result in pronunciation variations. This is usually accounted for by modeling sounds in context: for example, triphones model a phoneme in the context of its preceding and succeeding phonemes, thereby greatly multiplying the number of HMM models required. The various methods that have been employed in dealing with sign transitions are generally different from the context-dependent models in speech. For example, Starner et al. BIB002 and Bauer and Kraiss BIB004 used one HMM to model each sign word (or subunit, in BIB004 ) and trained the HMMs using data from entire sentences in an embedded training scheme, in order to incorporate variations in sign appearance during continuous signing. This would result in a large variation in the observations of the initial and ending states of an HMM due to the large variations in the appearance of all the possible movement epenthesis that could occur between two signs. This may result in loss of modeling accuracy for valid sign words. Wang et al. ([146] , BIB005 ) used a different approach where they trained HMMs on isolated words and subunits and chained them together only at recognition time, while employing measures to detect and discount possible movement epenthesis frames: signs were assumed to end in still frames, and the following frames were considered to be transition frames. This method of training with isolated sign data would not be able to accommodate processes where the appearance of a sign is affected by its context (e.g., hold deletion). Other works accounted for movement epenthesis by explicitly modeling it. In Assan and Grobel BIB001 , all transitions between signs go through a single state, while in Gao et al.
BIB003 separate HMMs model the transitions between each unique pair of signs that occur in sequence (Fig. 7). In more recent experiments BIB007 , the number of such transition HMMs was reduced by clustering the transition frames. In Vogler , separate HMMs model the transitions between each unique ending and starting location of signs (Fig. 6a). In BIB003 , BIB007 and , all HMM models are trained on data from entire sentences and, hence, in principle, variations in sign appearance due to context are accounted for. Vogler also assessed the advantage of explicit epenthesis modeling by making experimental comparisons with context-independent HMMs (as used in BIB002 , BIB004 ), and context-dependent biphone HMMs (one HMM is trained for every valid combination of two signs). On a test set of 97 sentences constructed from a 53-sign vocabulary, explicit epenthesis modeling was shown to have the best word recognition accuracy (92.1 percent) while context-independent modeling had the worst (87.7 percent versus 89.9 percent for biphone models). Yuan et al. BIB006 used HMMs for continuous sign recognition without employing a language model. They alternated word recognition with movement epenthesis detection. The ending data frame of a word was detected when the attempt to match subsequent frames to the word's last state produced a sharp drop in the probability scores. The next few frames were regarded as movement epenthesis if there was significant movement of a short duration and were discarded. Word recognition accuracy for sentences employing a vocabulary of 40 CSL signs was 70 percent.
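The "one HMM per sign" scheme that runs through the systems above can be sketched with a toy discrete-observation example: each candidate sign model scores the observed frame sequence with the Viterbi algorithm, and the sign whose model scores highest is reported. The two models, their probabilities, and the two-symbol feature alphabet are all invented for illustration; none of this is taken from the cited implementations.

```python
import math

def viterbi_logprob(obs, start, trans, emit):
    """Log-probability of the single best state path (Viterbi) for a
    discrete-output HMM, given an observation sequence `obs`."""
    n = len(start)
    v = [math.log(start[s]) + math.log(emit[s][obs[0]]) for s in range(n)]
    for o in obs[1:]:
        v = [max(v[p] + math.log(trans[p][s]) for p in range(n))
             + math.log(emit[s][o])
             for s in range(n)]
    return max(v)

# Two toy 2-state sign models over a 2-symbol feature alphabet {0, 1}.
SIGNS = {
    "HELLO": dict(start=[0.9, 0.1],
                  trans=[[0.7, 0.3], [0.1, 0.9]],
                  emit=[[0.9, 0.1], [0.2, 0.8]]),   # mostly 0s, then 1s
    "THANKS": dict(start=[0.9, 0.1],
                   trans=[[0.7, 0.3], [0.1, 0.9]],
                   emit=[[0.1, 0.9], [0.8, 0.2]]),  # mostly 1s, then 0s
}

def recognize(obs):
    # Score the frame sequence against every sign model; best score wins.
    return max(SIGNS, key=lambda s: viterbi_logprob(obs, **SIGNS[s]))
```

Continuous recognition as in BIB002 or BIB007 would additionally chain such sign models, with transition or epenthesis models between them, inside one larger Viterbi search.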
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Abstract This paper describes a method of classifying single view deaf-and-mute sign language motion images. We suppose the sign language word is composed of a time sequence of units called cheremes. The chereme is described by handshape, movement, and location of the hand, which can be said to express the 3-D features of the sign language. First, a dictionary for recognizing the sign language is made based on the cheremes. Then, the macro 2-D features of the location of a hand and its movement are extracted from the red component of the input color image sequence. Further, the micro 2-D features of the shape of the hand are also extracted if necessary. The 3-D feature descriptions of the dictionary are converted into 2-D image features, and the input sign language image is classified according to the extracted features of the 2-D image. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> A gesture recognition method for Japanese sign language is presented. We have developed a posture recognition system using neural networks which could recognize a finger alphabet of 42 symbols. We then developed a gesture recognition system where each gesture specifies a word. Gesture recognition is more difficult than posture recognition because it has to handle dynamic processes. To deal with dynamic processes we use a recurrent neural network. Here, we describe a gesture recognition method which can recognize continuous gesture. We then discuss the results of our research. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> In this paper, a framework for maximum a posteriori (MAP) estimation of hidden Markov models (HMM) is presented. 
Three key issues of MAP estimation, namely, the choice of prior distribution family, the specification of the parameters of prior densities, and the evaluation of the MAP estimates, are addressed. Using HMMs with Gaussian mixture state observation densities as an example, it is assumed that the prior densities for the HMM parameters can be adequately represented as a product of Dirichlet and normal-Wishart densities. The classical maximum likelihood estimation algorithms, namely, the forward-backward algorithm and the segmental k-means algorithm, are expanded, and MAP estimation formulas are developed. Prior density estimation issues are discussed for two classes of applications (parameter smoothing and model adaptation) and some experimental results are given illustrating the practical interest of this approach. Because of its adaptive nature, Bayesian learning is shown to serve as a unified approach for a wide range of speech recognition applications. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> The design and evaluation of a two-stage neural network which can recognize isolated ASL signs is given. The input to this network is the hand shape and position data obtained from a DataGlove mounted with a Polhemus sensor. The first level consists of four backpropagation neural networks which can recognize the sign language phonology, namely, the 36 hand shapes, 10 locations, 11 orientations, and 11 hand movements. The recognized phonemes from the beginning, middle, and end of the sign are fed to the second stage which recognizes the actual signs. Both backpropagation and Kohonen's self-organizing neural networks were used to compare the performance and the expandability of the learned vocabulary. In the current work, six signers with differing hand sizes signed 14 signs which included hand shape, position, and motion fragile and triple robust signs.
When a backpropagation network was used for the second stage, the results show that the network was able to recognize these signs with an overall accuracy of 86%. Further, the recognition results were linearly dependent on the size of the finger in relation to the metacarpophalangeal joint and the total length of the hand. When the second stage was a Kohonen's self-organizing network, the network could not only recognize the signs with 84% accuracy, but also expand its learned vocabulary through relabeling. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> We present a system for recognising hand-gestures in Sign language. The system works in real-time and uses input from a colour video camera. The user wears different coloured gloves on either hand and colour matching is used to distinguish the hands from each other and from the background. So far the system has been tested in fixed lighting conditions, with the camera a fixed distance from the user. The system is user-dependent. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> We describe a video-based analysis system for acquisition and classification of hand-arm motion concerning German sign language. These motions are recorded with a single video camera by use of a modular framegrabber system. Data acquisition as well as motion classification are performed in real-time. A colour coded glove and coloured markers at the elbow and shoulder are used. These markers are segmented from the recorded input images as a first step of image processing. Thereafter features of these coloured areas are calculated which are used for determining the 2D positions for each frame and hence the positions of hand and arm. The missing third dimension is derived from a geometric model of the human hand-arm system.
The sequence of the position data is converted into a certain representation of motion. Motion is derived from rule-based classification of the performed gesture, which yields a recognition rate of 95%. <s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This paper documents the recognition method of deciphering Japanese sign language (JSL) using projected images. The goal of the movement recognition is to foster communication between the hearing impaired and people capable of normal speech. We use a stereo camera for recording three-dimensional movements, an image processing board for tracking movements, and a personal computer for an image processor charting the recognition of JSL patterns. This system works by formalizing the space area of the signers according to the characteristics of the human body, determining components such as location and movements, and then recognizing sign language patterns. The system is able to recognize JSL by determining the extent of similarities in the sign field, and does so even when vibrations in hand movements occur and when there are differences in body build. We obtained useful results from recognition experiments on 38 different JSL signs with two signers. <s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This work presents a design for a human computer interface capable of recognizing 25 gestures from the international hand alphabet in real-time. Principal Component Analysis (PCA) is used to extract features from images of gestures. The features represent gesture images in terms of an optimal coordinate system, in which the classes of gestures make up clusters. The system is divided into two parts: an off-line and an on-line part. The feature selection and generation of a classifier is performed off-line.
On-line, the obtained features and the classifier are used to classify new and unknown gesture images in real-time. Results show that an overall off-line recognition rate averaging 99% on 1500 images is achieved when trained on 1000 other images. The on-line system runs at 14 frames per second. <s> BIB008 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> A new pattern matching method, the partly-hidden Markov model, is proposed for gesture recognition. The hidden Markov model, which is widely used for the time series pattern recognition, can deal with only piecewise stationary stochastic processes. We solved this problem by introducing the modified second order Markov model, in which the first state is hidden and the second one is observable. As shown by the results of 6 sign-language recognition tests, the error rate was improved by 73% compared with the normal HMM. <s> BIB009 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This paper is concerned with the video-based recognition of signs. Concentrating on the manual parameters of sign language, the system aims for the signer dependent recognition of 262 different signs taken from Sign Language of the Netherlands. For Hidden Markov Modelling a sign is considered a doubly stochastic process, represented by an unobservable state sequence. The observations emitted by the states are regarded as feature vectors, that are extracted from video frames. This work deals with three topics: Firstly the recognition of isolated signs, secondly the influence of variations of the feature vector on the recognition rate and thirdly an approach for the recognition of connected signs. The system achieves recognition rates up to 94% for isolated signs and 73% for a reduced vocabulary of connected signs.
<s> BIB010 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> A large vocabulary sign language interpreter is presented with real-time continuous gesture recognition of sign language using a data glove. Sign language, which is usually known as a natural language with formal semantic definitions and syntactic rules, is a large set of hand gestures that are daily used to communicate with the hearing impaired. The most critical problem, end-point detection in a stream of gesture input, is first solved and then statistical analysis is done according to four parameters in a gesture: posture, position, orientation, and motion. The authors have implemented a prototype system with a lexicon of 250 vocabularies and collected 196 training sentences in Taiwanese Sign Language (TWL). This system uses hidden Markov models (HMMs) for 51 fundamental postures, 6 orientations, and 8 motion primitives. In a signer-dependent way, a sentence of gestures based on these vocabularies can be continuously recognized in real-time, and the average recognition rate is 80.4%. <s> BIB011 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American sign language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon. <s> BIB012 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania.
SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures. <s> BIB013 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This paper presents a system for the recognition of sign language based on a theory of shape representation using size functions proposed by P. Frosini [5]. Our system consists of three modules: feature extraction, sign representation and sign recognition. The first performs an edge detection operation, the second uses size functions and inertia moments to represent hand signs, and the last uses a neural network to recognize hand gestures. Sign representation is an important step which we will deal with. Unlike previous work [15, 16], a new approach to the representation of hand gestures is proposed, based on size functions. Each sign is represented by means of a feature vector computed from a new pair of moment-based size functions. The work reported here indicates that moment-based size functions can be effectively used for the recognition of sign language even in the presence of shape changes due to differences in hands, position, style of signing, and viewpoint. <s> BIB014 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> In this paper, a system designed for helping the deaf to communicate with others is presented. Some useful new ideas are proposed in design and implementation. An algorithm based on geometrical analysis for the purpose of extracting invariant feature to signer position is presented. An ANN–DP combined approach is employed for segmenting subwords automatically from the data stream of sign signals. To tackle the epenthesis movement problem, a DP-based method has been used to obtain the context-dependent models. 
Some techniques for system implementation are also given, including fast matching, frame prediction and search algorithms. The implemented system is able to recognize continuous large vocabulary Chinese Sign Language. Experiments show that the proposed techniques in this paper are efficient in terms of both recognition speed and recognition performance. <s> BIB015 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Automatic gesture recognition systems generally require two separate processes: a motion sensing process where some motion features are extracted from the visual input; and a classification process where the features are recognised as gestures. We have developed the Hand Motion Understanding (HMU) system that uses the combination of a 3D model-based hand tracker for motion sensing and an adaptive fuzzy expert system for motion classification. The HMU system understands static and dynamic hand signs of the Australian Sign Language (Auslan). This paper presents the hand tracker that extracts 3D hand configuration data with 21 degrees-of-freedom (DOFs) from a 2D image sequence that is captured from a single viewpoint, with the aid of a colour-coded glove. Then the temporal sequence of 3D hand configurations detected by the tracker is recognised as a sign by an adaptive fuzzy expert system. The HMU system was evaluated with 22 static and dynamic signs. Before training the HMU system achieved 91% recognition, and after training it achieved over 95% recognition. <s> BIB016 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Sign language is the language used by the deaf, which is a comparatively steadier expressive system composed of signs corresponding to postures and motions assisted by facial expression. The objective of sign language recognition research is to "see" the language of the deaf.
The integration of sign language recognition and sign language synthesis jointly comprises a "human-computer sign language interpreter", which facilitates the interaction between the deaf and their surroundings. Considering the speed and performance of the recognition system, Cyberglove is selected as the gesture input device in our sign language recognition system, the Semi-Continuous Dynamic Gaussian Mixture Model (SCDGMM) is used as the recognition technique, and a search scheme based on relative entropy is proposed and applied to SCDGMM-based sign word recognition. Compared with the SCDGMM recognizer without the search scheme, the recognition time of the SCDGMM recognizer with the search scheme is reduced almost 15-fold. <s> BIB017 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Gesture-based applications widely range from replacing the traditional mouse as a position device to virtual reality and communication with the deaf. The article presents a fuzzy rule-based approach to spatio-temporal hand gesture recognition. This approach employs a powerful method based on hyperrectangular composite neural networks (HRCNNs) for selecting templates. Templates for each hand shape are represented in the form of crisp IF-THEN rules that are extracted from the values of synaptic weights of the corresponding trained HRCNNs. Each crisp IF-THEN rule is then fuzzified by employing a special membership function in order to represent the degree to which a pattern is similar to the corresponding antecedent part. When an unknown gesture is to be classified, each sample of the unknown gesture is tested by each fuzzy rule. The accumulated similarity associated with all samples of the input is computed for each hand gesture in the vocabulary, and the unknown gesture is classified as the gesture yielding the highest accumulative similarity. Based on the method we can implement a small-sized dynamic hand gesture recognition system.
Two databases which consisted of 90 spatio-temporal hand gestures are utilized for verifying its performance. An encouraging experimental result confirms the effectiveness of the proposed method. <s> BIB018 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> The accurate classification of hand gestures is crucial in the development of novel hand gesture-based systems designed for human-computer interaction (HCI) and for human alternative and augmentative communication (HAAC). A complete vision-based system, consisting of hand gesture acquisition, segmentation, filtering, representation and classification, is developed to robustly classify hand gestures. The algorithms in the subsystems are formulated or selected to optimally classify hand gestures. The gray-scale image of a hand gesture is segmented using a histogram thresholding algorithm. A morphological filtering approach is designed to effectively remove background and object noise in the segmented image. The contour of a gesture is represented by a localized contour sequence whose samples are the perpendicular distances between the contour pixels and the chord connecting the end-points of a window centered on the contour pixels. Gesture similarity is determined by measuring the similarity between the localized contour sequences of the gestures. Linear alignment and nonlinear alignment are developed to measure the similarity between the localized contour sequences. Experiments and evaluations on a subset of American Sign Language (ASL) hand gestures show that, by using nonlinear alignment, no gestures are misclassified by the system. Additionally, it is also estimated that real-time gesture classification is possible through the use of a high-speed PC, high-speed digital signal processing chips and code optimization.
<s> BIB019 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This paper introduces a model-based hand gesture recognition system, which consists of three phases: feature extraction, training, and recognition. In the feature extraction phase, a hybrid technique combines the spatial (edge) and the temporal (motion) information of each frame to extract the feature images. Then, in the training phase, we use the principal component analysis (PCA) to characterize spatial shape variations and the hidden Markov models (HMM) to describe the temporal shape variations. A modified Hausdorff distance measurement is also applied to measure the similarity between the feature images and the pre-stored PCA models. The similarity measures are referred to as the possible observations for each frame. Finally, in recognition phase, with the pre-trained PCA models and HMM, we can generate the observation patterns from the input sequences, and then apply the Viterbi algorithm to identify the gesture. In the experiments, we prove that our method can recognize 18 different continuous gestures effectively. <s> BIB020 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Hand gestures play an important role in communication between people during their daily lives. But the extensive use of hand gestures as a mean of communication can be found in sign languages. Sign language is the basic communication method between deaf people. A translator is usually needed when an ordinary person wants to communicate with a deaf one. The work presented in this paper aims at developing a system for automatic translation of gestures of the manual alphabets in the Arabic sign language. In doing so, we have designed a collection of ANFIS networks, each of which is trained to recognize one gesture. 
Our system does not rely on using any gloves or visual markings to accomplish the recognition job. Instead, it deals with images of bare hands, which allows the user to interact with the system in a natural way. An image of the hand gesture is processed and converted into a set of features that comprises the lengths of some vectors which are selected to span the fingertips' region. The extracted features are rotation, scale, and translation invariant, which makes the system more flexible. The subtractive clustering algorithm and the least-squares estimator are used to identify the fuzzy inference system, and the training is achieved using the hybrid learning algorithm. Experiments revealed that our system was able to recognize the 30 Arabic manual alphabets with an accuracy of 93.55%. <s> BIB021 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> A divide-and-conquer approach is presented for signer-independent continuous Chinese Sign Language (CSL) recognition in this paper. The problem of continuous CSL recognition is divided into the subproblems of isolated CSL recognition. The simple recurrent network (SRN) and the hidden Markov models (HMM) are combined in this approach. The improved SRN is introduced for segmentation of continuous CSL. Outputs of the SRN are regarded as the states of the HMM, and the Lattice Viterbi algorithm is employed to search the best word sequence in the HMM framework. Experimental results show that the SRN/HMM approach has better performance than the standard HMM one. <s> BIB022 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This paper deals with the automatic recognition of German signs. The statistical approach is based on the Bayes decision rule for minimum error rate.
Following speech recognition system designs, which are in general based on phonemes, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs is outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected, as new signs can be added to the vocabulary database without re-training the existing hidden Markov models (HMMs) for subunits. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits. In first experiments, a recognition accuracy of 92.5% was achieved for 100 signs, which were previously trained. For 50 new signs an accuracy of 81% was achieved without retraining of subunit-HMMs. <s> BIB023 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Hitherto, the major challenge to sign language recognition is how to develop approaches that scale well with increasing vocabulary size. We present an approach to large vocabulary, continuous Chinese sign language (CSL) recognition that uses phonemes instead of whole signs as the basic units. Since the number of phonemes is limited, HMM-based training and recognition of the CSL signal becomes more tractable and has the potential to recognize enlarged vocabularies. Furthermore, the proposed method facilitates the CSL recognition when the finger-alphabet is blended with gestures. About 2400 phonemes are defined for CSL. One HMM is built for each phoneme, and then the signs are encoded based on these phonemes. A decoder that uses a tree-structured network is presented. Clustering of the Gaussians on the states, the language model and N-best-pass is used to improve the performance of the system. Experiments on a 5119 sign vocabulary are carried out, and the result is exciting.
<s> BIB024 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Principal Component Analysis (PCA) and Multiple Discriminant Analysis (MDA) have long been used for appearance-based hand posture recognition. In this paper, we propose a novel PCA/MDA scheme for hand posture recognition. Unlike other PCA/MDA schemes, the PCA layer acts as a crude classification. Since posture alone cannot provide sufficient discriminating information, each input pattern will be given a likelihood of being in the nodes of PCA layers, instead of a strict division. Based on the Expectation-Maximization (EM) algorithm, we introduce three methods to estimate the parameters for this crude classification during training. The experiments on a 110-sign vocabulary show a significant improvement compared with the global PCA/MDA. <s> BIB025 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> The paper presents a portable system and method for recognizing the 26 hand shapes of the American Sign Language alphabet, using a novel glove-like device. Two additional signs, 'space', and 'enter' are added to the alphabet to allow the user to form words or phrases and send them to a speech synthesizer. Since the hand shape for a letter varies from one signer to another, this is a 28-class pattern recognition system. A three-level hierarchical classifier divides the problem into "dispatchers" and "recognizers." After reducing pattern dimension from ten to three, the projection of class distributions onto horizontal planes makes it possible to apply simple linear discrimination in 2D, and Bayes' Rule in those cases where classes had features with overlapped distributions. Twenty-one out of 26 letters were recognized with 100% accuracy; the worst case, letter U, achieved 78%.
<s> BIB026 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> In this paper, we introduce a hand gesture recognition system to recognize continuous gesture before stationary background. The system consists of four modules: a real time hand tracking and extraction, feature extraction, hidden Markov model (HMM) training, and gesture recognition. First, we apply a real-time hand tracking and extraction algorithm to trace the moving hand and extract the hand region, then we use the Fourier descriptor (FD) to characterize spatial features and the motion analysis to characterize the temporal features. We combine the spatial and temporal features of the input image sequence as our feature vector. After having extracted the feature vectors, we apply HMMs to recognize the input gesture. The gesture to be recognized is separately scored against different HMMs. The model with the highest score indicates the corresponding gesture. In the experiments, we have tested our system to recognize 20 different gestures, and the recognizing rate is above 90%. <s> BIB027 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> This work presents a hierarchical approach to recognize isolated 3-D hand gesture trajectories for signing exact English (SEE). SEE hand gestures can be periodic as well as non-periodic. We first differentiate between periodic and non-periodic gestures followed by recognition of individual gestures. After periodicity detection, non-periodic trajectories are classified into 8 classes and periodic trajectories are classified into 4 classes. A Polhemus tracker is used to provide the input data. Periodicity detection is based on Fourier analysis and hand trajectories are recognized by vector quantization principal component analysis (VQPCA). The average periodicity detection accuracy is 95.9%.
The average recognition rates with VQPCA for non-periodic and periodic gestures are 97.3% and 97.0% respectively. In comparison, k-means clustering yielded 87.0% and 85.1%, respectively. <s> BIB028 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> Grammatical information conveyed through systematic temporal and spatial movement modifications is an integral aspect of sign language communication. We propose to model these systematic variations as simultaneous channels of information. Classification results at the channel level are output to Bayesian networks which recognize both the basic gesture meaning and the grammatical information (here referred to as layered meanings). With a simulated vocabulary of 6 basic signs and 5 possible layered meanings, test data for eight test subjects was recognized with 85.0% accuracy. We also adapt a system trained on three test subjects to recognize gesture data from a fourth person, based on a small set of adaptation data. We obtained gesture recognition accuracy of 88.5% which is a 75.7% reduction in error rate as compared to the unadapted system. <s> BIB029 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Signer Independence <s> A Bayesian network is a graphical model that encodes probabilistic relationships among variables of interest. When used in conjunction with statistical techniques, the graphical model has several advantages for data analysis. One, because the model encodes dependencies among all variables, it readily handles situations where some data entries are missing. Two, a Bayesian network can be used to learn causal relationships, and hence can be used to gain understanding about a problem domain and to predict the consequences of intervention.
Three, because the model has both a causal and probabilistic semantics, it is an ideal representation for combining prior knowledge (which often comes in causal form) and data. Four, Bayesian statistical methods in conjunction with Bayesian networks offer an efficient and principled approach for avoiding the overfitting of data. In this paper, we discuss methods for constructing Bayesian networks from prior knowledge and summarize Bayesian statistical methods for using data to improve these models. With regard to the latter task, we describe methods for learning both the parameters and structure of a Bayesian network, including techniques for learning with incomplete data. In addition, we relate Bayesian-network methods for learning to techniques for supervised and unsupervised learning. We illustrate the graphical-modeling approach using a real-world case study. <s> BIB030
Analogous to speaker independence in speech recognition, an ideal sign recognition system would work "right out of the box," giving good recognition accuracy for signers not represented in the training data set (unregistered signers). Sources of interperson variations that could impact sign recognition accuracy include different personal signing styles, different sign usage due to geographical or social background ( ), and fit of gloves in direct-measure device approaches. In this area, sign recognition lags far behind speech: many works report signer-dependent results where a single signer provided both training and test data ( BIB023 , BIB015 , BIB016 , BIB011 , BIB007 , BIB002 , BIB012 , BIB005 , BIB001 , , , BIB024 , BIB017 ), while other works have only 2 to 10 signers in the training and test set ( BIB008 , BIB025 , BIB019 , BIB014 , BIB026 , BIB006 , , BIB018 , , BIB013 , BIB004 ). The largest number of test subjects was 20 in BIB027 , BIB020 , BIB009 and 60 for alphabet handshape recognition in BIB021 . This is still significantly fewer than the number of test speakers for which good results were reported in speech systems. When the number of signers in the training set is small, results on test data from unregistered signers can be severely degraded. In Kadous , accuracy decreased from an average of 80 percent to 15 percent when the system that was trained on four signers was tested on an unregistered signer. In Assan and Grobel BIB010 , accuracy for training on one signer and testing on a different signer was 51.9 percent compared to 92 percent when the same signer supplied both training and test data. Better results were obtained when data from more signers was used for training. In Vamplew and Adams BIB013 , seven signers provided training data; test data from these same (registered) signers was recognized with 94.2 percent accuracy versus 85.3 percent accuracy for three unregistered signers. Fang et al.
BIB022 trained a recognition system for continuous signing on five signers and obtained test data accuracy of 92.1 percent for these signers, compared to 85.0 percent for an unregistered signer. Classification accuracy for unregistered signers is also relatively good when only handshape is considered, perhaps due to less interperson variation as compared to the other gesture components. For example, BIB014 and BIB018 reported 93-96 percent handshape classification accuracy for registered signers versus 85-91 percent accuracy for unregistered signers. Interestingly, Kong and Ranganath BIB028 showed similarly good results for classifying 3D movement trajectories. Test data from six unregistered signers were classified with 91.2 percent accuracy versus 99.7 percent for test data from four registered signers. In speech recognition, performance for a new speaker can be improved by using a small amount of data from the new speaker to adapt a previously trained system without retraining the system from scratch. The equivalent area of signer adaptation is relatively new. Some experimental results were shown in Ong and Ranganath BIB029 where speaker adaptation methods were modified to perform maximum a posteriori estimation BIB003 on component-level classifiers and Bayesian estimation of Bayesian Network parameters BIB030 . This gave 88.5 percent gesture recognition accuracy for test data from a new signer by adapting a system that was previously trained on three other signers, a 75.7 percent reduction in error rate as compared to using the unadapted system.
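The signer-independence results surveyed above follow a common evaluation pattern: hold out all data from one signer, train on the rest, and test on the held-out (unregistered) signer; adaptation gains are then reported as relative error-rate reduction. A minimal sketch of both computations (the data layout and the train_and_eval callback are illustrative assumptions, not an interface from any of the cited systems):

```python
def leave_one_signer_out(samples, train_and_eval):
    """Evaluate signer independence by holding out each signer in turn.

    samples: list of (signer_id, features, label) tuples.
    train_and_eval: callable(train_set, test_set) -> accuracy in [0, 1].
    Returns a dict mapping each held-out (unregistered) signer to the
    accuracy obtained on that signer's data.
    """
    signers = sorted({signer for signer, _, _ in samples})
    results = {}
    for held_out in signers:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        results[held_out] = train_and_eval(train, test)
    return results


def relative_error_reduction(acc_before, acc_after):
    """Relative reduction in error rate (accuracies given in [0, 1])."""
    err_before, err_after = 1.0 - acc_before, 1.0 - acc_after
    return (err_before - err_after) / err_before
```

As a consistency check on the figures above: an adapted accuracy of 88.5 percent combined with a 75.7 percent error-rate reduction implies an unadapted accuracy of about 52.7 percent, since the unadapted error rate is 0.115 / (1 - 0.757) ≈ 0.473.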
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> This paper explores the use of local parametrized models of image motion for recovering and recognizing the non-rigid and articulated motion of human faces. Parametric flow models (for example affine) are popular for estimating motion in rigid scenes. We observe that within local regions in space and time, such models not only accurately model non-rigid facial motions but also provide a concise description of the motion in terms of a small number of parameters. These parameters are intuitively related to the motion of facial features during facial expressions and we show how expressions such as anger, happiness, surprise, fear, disgust and sadness can be recognized from the local parametric motions in the presence of significant head motion. The motion tracking and expression recognition approach performs with high accuracy in extensive laboratory experiments involving 40 subjects as well as in television and movie sequences. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> This paper describes a method of real-time facial expression recognition which is based on automatic measurement of the facial features' dimension and the positional relationship between them. The method is composed of two parts, the facial feature extraction using matching techniques and the facial expression recognition using statistics of position and dimension of the features. The method is implemented in an experimental hardware system and the performance is evaluated. The extraction rates of the facial-area, the mouth and the eyes are about 100%, 96% and 90%, respectively, and the recognition rates of facial expression such as normal, angry, surprise, smile and sad expression are 54%, 89%, 86%, 53% and 71%, respectively, for a specific person.
The whole processing speed is about 15 frames/second. Finally, we touch on some applications such as man-machine interface, automatic generation of facial graphic animation and sign language translation using facial expression recognition techniques. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> Sign Language is a rich and expressive means of communication used by profoundly deaf people as an alternative to speech. Computer recognition of sign language represents a demanding research objective analogous to recognising continuous speech, but has many more potential applications in areas such as human body motion tracking and analysis, surveillance, video telephony, and non-invasive interfacing with virtual reality applications. In this paper, we survey those aspects of human body motion which are relevant to sign language, and outline an overall system architecture for computer vision-based sign language recognition. We then discuss the initial stages of processing required, and show how recognition of static manual and facial gestures can be used to provide the low-level features from which an integrated multi-channel dynamic gesture recognition system can be constructed. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> Humans detect and interpret faces and facial expressions in a scene with little or no effort. Still, development of an automated system that accomplishes this task is rather difficult. There are several related problems: detection of an image segment as a face, extraction of the facial expression information, and classification of the expression (e.g., in emotion categories). A system that performs these operations accurately and in real time would form a big step in achieving a human-like interaction between man and machine. 
The paper surveys the past work in solving these problems. The capability of the human visual system with respect to these problems is discussed, too. It is meant to serve as an ultimate goal and a guide for determining recommendations for development of an automatic facial expression analyzer. <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> Most automatic facial expression analysis systems try to analyze emotion categories. However, psychologists argue that there is no straightforward way to classify, emotions from facial expressions. Instead, they propose FACS (facial action coding system), a de-facto standard for categorizing facial actions independent from emotional categories. We describe a system that recognizes asymmetric FACS action unit activities and intensities without the use of markers. Facial expression extraction is achieved by difference images that are projected into a sub-space using either PCA or ICA, followed by nearest neighbor classification. Experiments show that this holistic approach achieves a recognition performance comparable to marker-based facial expression analysis systems or human FACS experts for a single-subject database recorded under controlled conditions. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> Abstract This paper discusses our expert system called Integrated System for Facial Expression Recognition (ISFER), which performs recognition and emotional classification of human facial expression from a still full-face image. The system consists of two major parts. The first one is the ISFER Workbench, which forms a framework for hybrid facial feature detection. Multiple feature detection techniques are applied in parallel. 
The redundant information is used to define unambiguous face geometry containing no missing or highly inaccurate data. The second part of the system is its inference engine called HERCULES, which converts low level face geometry into high level facial actions, and then this into highest level weighted emotion labels. <s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> This paper describes a vision-based method for recognizing the nonmanual information in Japanese Sign Language (JSL). This new modality information provides grammatical constraints useful for JSL word segmentation and interpretation. Our attention is focused on head motion, the most dominant non-manual information in JSL. We designed an interactive color-modeling scheme for robust face detection. Two video cameras are vertically arranged to take the frontal and profile image of the JSL user, and head motions are classified into eleven patterns. Moment-based feature and statistical motion feature are adopted to represent these motion patterns. Classification of the motion features is performed with linear discriminant analysis method. Initial experimental results show that the method has good recognition rate and can be realized in real-time. <s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> Non-manual signals (NMS) are grammatical elements in sign languages. They may convey information that reinforces or is additional to the hand signing. NMS are similar to facial expressions except that, unlike spontaneous emotions, NMS are deliberate gestures. This paper explores the use of Independent Component Analysis (ICA) and Gabor wavelet networks (GWNs) for recognising 3 upper face and 3 lower face expressions related to NMS. Independent component analysis and Gabor wavelet networks were compared as representations for these facial signals. Both representations provided good recognition performance. The method of using GWNs with 116 wavelets outperformed ICA (85.3% and 93.3% for upper and lower face respectively, compared to 78.7% and 92% for ICA). However, the GWN method is computationally more expensive. <s> BIB009 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> An automated system for detection of head movements is described. The goal is to label relevant head gestures in video of American Sign Language (ASL) communication. In the system, a 3D head tracker recovers head rotation and translation parameters from monocular video. Relevant head gestures are then detected by analyzing the length and frequency of the motion signal's peaks and valleys.
Each parameter is analyzed independently, due to the fact that a number of relevant head movements in ASL are associated with major changes around one rotational axis. No explicit training of the system is necessary. Currently, the system can detect "head shakes." In experimental evaluation, classification performance is compared against ground-truth labels obtained from ASL linguists. Initial results are promising, as the system matches the linguists' labels in a significant number of cases. <s> BIB010 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> ANALYSIS OF NONMANUAL SIGNALS (NMS) 4.1 Issues <s> Over the last decade, automatic facial expression analysis has become an active research area that finds potential applications in areas such as more engaging human-computer interfaces, talking heads, image retrieval and human emotion analysis. Facial expressions reflect not only emotions, but other mental activities, social interaction and physiological signals. In this survey we introduce the most prominent automatic facial expression analysis methods and systems presented in the literature. Facial motion and deformation extraction approaches as well as classification methods are discussed with respect to issues such as face normalization, facial expression dynamics and facial expression intensity, but also with regard to their robustness towards environmental changes. <s> BIB011
Broadly, the main elements of NMS in SL involve facial expressions, head and body pose, and movement. Often body and especially head movements co-occur with facial expressions (e.g., a question is asked by thrusting the head forward while simultaneously raising the eyebrows). The head could also tilt to the side or rotate left/right. This is further complicated by hand gestures being performed on or in front of the face/head region. Thus, tracking of the head is required while it is undergoing rigid motion, with possible out-of-plane rotation and occlusion by hands. Further, the face has to be distinguished from the hands. Recent surveys BIB011 , BIB004 show much research interest in automatic analysis of facial expressions. However, these works generally cannot be directly applied to facial expressions in NMS due to their limited robustness and inability to characterize the temporal evolution of expressions. Most facial expression recognition approaches constrain faces to be fairly stationary and frontal to the camera. On the other hand, works that consider head tracking in less constrained environments do not include facial expression recognition. Black and Yacoob's local parametric model BIB001 is a notable exception: they successfully tracked facial features under significant rigid head motion and out-of-plane rotation and recognized six different expressions of emotions in video sequences. Though facial expressions in NMS involve articulators that include the cheeks, tongue, nose and chin, most local feature-based approaches only consider the mouth, eyes and eyebrows (e.g., BIB001 ). Facial expression has often been analyzed on static images of the peak expression, thereby ignoring the dynamics, timing, and intensity of the expression. This is not a good fit for NMS where different facial expressions are performed sequentially, and sometimes repetitively, evolving over a period of time.
Thus, the timing of the expression in relation to the hand gestures produced, as well as the temporal evolution of the expression's intensity need to be determined. There are very few works that measure the intensity of facial expressions or which model the dynamics of expressions (examples of exceptions are BIB001 , BIB005 ). In many works, facial expression recognition is limited to the six basic emotions as defined by Ekman (happiness, sadness, surprise, fear, anger, disgust) plus the neutral expression, which involve the face as a whole. This is too constrained for NMS where the upper and lower face expressions can be considered to be separate, parallel channels of information that carry different grammatical information or semantic meaning . In this respect, the more promising approaches use a mid-level representation of facial action either defined by the researchers themselves ( BIB001 ) or which follow an existing coding scheme (MPEG-4 or Facial Action Coding System ). The recognition results of the mid-level representation code could in turn be used to interpret NMS facial expressions, in a fashion similar to rule-based approaches which interpret recognized codes as emotion classes BIB001 , BIB006 . A few works that consider facial expression analysis BIB008 , BIB009 , BIB002 , BIB003 and head motion and pose analysis BIB010 , BIB007 in the context of SL are described in Appendix D (www.computer.org/publications/dlib). The body movements and postures involved in NMS generally consist of torso motion (without whole-body movement), for example, body leaning forwards/backwards or turning to the sides. So far, no work has specifically considered recognition of this type of body motion. Although there has been much work done in tracking and recognition of human activities that involve whole body movements, e.g., walking or dancing (as surveyed in ), these approaches may have difficulty in dealing with the subtler body motions exhibited in NMS.
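The rule-based interpretation route mentioned above, recognizing mid-level codes first and mapping them to meanings afterwards, can be sketched as a small lookup. The action-unit numbers follow FACS conventions (AU1/AU2 are the brow raisers, AU4 the brow lowerer), but the specific rules and marker names are illustrative assumptions, not rules taken from any cited work:

```python
# Each rule fires when all of its required mid-level codes are detected.
# The rules are hypothetical examples of mapping FACS-style action units
# (plus a head-motion code) to NMS grammatical markers.
NMS_RULES = [
    ({"AU1", "AU2"}, "yes/no question marker (raised eyebrows)"),
    ({"AU4"}, "wh-question marker (furrowed brows)"),
    ({"HEAD_SHAKE"}, "negation marker"),
]


def interpret_nms(detected_codes):
    """Return every NMS marker whose required codes are all present,
    so parallel upper-face and head channels can fire simultaneously."""
    detected = set(detected_codes)
    return [marker for required, marker in NMS_RULES if required <= detected]


print(interpret_nms({"AU1", "AU2", "HEAD_SHAKE"}))
# → ['yes/no question marker (raised eyebrows)', 'negation marker']
```

Treating the rule outputs as parallel channels, rather than a single emotion label, matches the observation above that upper- and lower-face expressions carry independent grammatical information.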
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> This paper describes a method of real-time facial expression recognition which is based on automatic measurement of the facial features' dimension and the positional relationship between them. The method is composed of two parts, the facial feature extraction using matching techniques and the facial expression recognition using statistics of position and dimension of the features. The method is implemented in an experimental hardware system and the performance is evaluated. The extraction rates of the facial-area, the mouth and the eyes are about 100%, 96% and 90%, respectively, and the recognition rates of facial expression such as normal, angry, surprise, smile and sad expression are 54%, 89%, 86%, 53% and 71%, respectively, for a specific person. The whole processing speed is about 15 frames/second. Finally, we touch on some applications such as man-machine interface, automatic generation of facial graphic animation and sign language translation using facial expression recognition techniques. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> Sign Language is a rich and expressive means of communication used by profoundly deaf people as an alternative to speech. Computer recognition of sign language represents a demanding research objective analogous to recognising continuous speech, but has many more potential applications in areas such as human body motion tracking and analysis, surveillance, video telephony, and non-invasive interfacing with virtual reality applications. In this paper, we survey those aspects of human body motion which are relevant to sign language, and outline an overall system architecture for computer vision-based sign language recognition. 
We then discuss the initial stages of processing required, and show how recognition of static manual and facial gestures can be used to provide the low-level features from which an integrated multi-channel dynamic gesture recognition system can be constructed. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> A person stands in front of a large projection screen on which is shown a checked floor. They say, "Make a table," and a wooden table appears in the middle of the floor."On the table, place a vase," they gesture using a fist relative to palm of their other hand to show the relative location of the vase on the table. A vase appears at the correct location."Next to the table place a chair." A chair appears to the right of the table."Rotate it like this," while rotating their hand causes the chair to turn towards the table."View the scene from this direction," they say while pointing one hand towards the palm of the other. The scene rotates to match their hand orientation.In a matter of moments, a simple scene has been created using natural speech and gesture. The interface of the future? Not at all; Koons, Thorisson and Bolt demonstrated this work in 1992 [23]. Although research such as this has shown the value of combining speech and gesture at the interface, most computer graphics are still being developed with tools no more intuitive than a mouse and keyboard. This need not be the case. Current speech and gesture technologies make multimodal interfaces with combined voice and gesture input easily achievable. There are several commercial versions of continuous dictation software currently available, while tablets and pens are widely supported in graphics applications. However, having this capability doesn't mean that voice and gesture should be added to every modeling package in a haphazard manner. 
There are numerous issues that must be addressed in order to develop an intuitive interface that uses the strengths of both input modalities.In this article we describe motivations for adding voice and gesture to graphical applications, review previous work showing different ways these modalities may be used and outline some general interface guidelines. Finally, we give an overview of promising areas for future research. Our motivation for writing this is to spur developers to build compelling interfaces that will make speech and gesture as common on the desktop as the keyboard and mouse. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> Keywords: speech Reference EPFL-CONF-82543 Record created on 2006-03-10, modified on 2017-05-10 <s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> We present a statistical approach to developing multimodal recognition systems and, in particular, to integrating the posterior probabilities of parallel input signals involved in the multimodal system. We first identify the primary factors that influence multimodal recognition performance by evaluating the multimodal recognition probabilities. We then develop two techniques, an estimate approach and a learning approach, which are designed to optimize accurate recognition during the multimodal integration process. We evaluate these methods using Quickset, a speech/gesture multimodal system, and report evaluation results based on an empirical corpus collected with Quickset. From an architectural perspective, the integration technique presented offers enhanced robustness. It also is premised on more realistic assumptions than previous multimodal systems using semantic fusion. 
From a methodological standpoint, the evaluation techniques that we describe provide a valuable tool for evaluating multimodal systems. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> This paper describes a vision-based method for recognizing the nonmanual information in Japanese Sign Language (JSL). This new modality information provides grammatical constraints useful for JSL word segmentation and interpretation. Our attention is focused on head motion, the most dominant non-manual information in JSL. We designed an interactive color-modeling scheme for robust face detection. Two video cameras are vertically arranged to take the frontal and profile image of the JSL user, and head motions are classified into eleven patterns. Moment-based feature and statistical motion feature are adopted to represent these motion patterns. Classification of the motion features is performed with linear discriminant analysis method. Initial experimental results show that the method has good recognition rate and can be realized in real-time. <s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> The parallel multistream model is proposed for integrating sign language recognition and lip motion. The different time scales existing in sign language and lip motion can be tackled well using this approach. Primary experimental results have shown that this approach is efficient for integration of sign language recognition and lip motion. The promising results indicated that the parallel multistream model can be a good solution in the framework of multimodal data fusion. An approach to recognize sign language with scalability with the size of vocabulary and a fast approach to locate lip corners are also proposed in this paper.
<s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> Non-manual signals (NMS) are grammatical elements in sign languages. They may convey information that reinforces or is additional to the hand signing. NMS are similar to facial expressions except that, unlike spontaneous emotions, NMS are deliberate gestures. This paper explores the use of Independent Component Analysis (ICA) and Gabor wavelet networks (GWNs) for recognising 3 upper face and 3 lower face expressions related to NMS. Independent component analysis and Gabor wavelet networks were compared as representations for these facial signals. Both representations provided good recognition performance. The method of using GWNs with 116 wavelets outperformed ICA (85.3% and 93.3% for upper and lower face respectively, compared to 78.7% and 92% for ICA). However, the GWN method is computationally more expensive. <s> BIB008 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> Integration of Manual Signing and Nonmanual Signals <s> An automated system for detection of head movements is described. The goal is to label relevant head gestures in video of American Sign Language (ASL) communication. In the system, a 3D head tracker recovers head rotation and translation parameters from monocular video. Relevant head gestures are then detected by analyzing the length and frequency of the motion signal's peaks and valleys. Each parameter is analyzed independently, due to the fact that a number of relevant head movements in ASL are associated with major changes around one rotational axis. No explicit training of the system is necessary. Currently, the system can detect "head shakes." In experimental evaluation, classification performance is compared against ground-truth labels obtained from ASL linguists.
Initial results are promising, as the system matches the linguists' labels in a significant number of cases. <s> BIB009
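The peak-and-valley analysis of the head rotation signal described above BIB009 can be illustrated with a minimal sketch. The yaw sequences and the thresholds `min_amp` and `min_extrema` are invented for illustration and are not taken from the paper:

```python
def count_extrema(signal, min_amp=2.0):
    """Count alternating peaks and valleys in a 1-D head-rotation signal
    (e.g. yaw angle per frame), keeping only extrema that differ from the
    previous kept extremum by at least min_amp."""
    extrema = []
    for i in range(1, len(signal) - 1):
        if signal[i - 1] < signal[i] > signal[i + 1] or \
           signal[i - 1] > signal[i] < signal[i + 1]:
            if not extrema or abs(signal[i] - extrema[-1]) >= min_amp:
                extrema.append(signal[i])
    return len(extrema)

def is_head_shake(yaw, min_extrema=4):
    """Flag a rapid alternation of sufficiently large extrema as a head
    shake. Illustrative thresholds only; each rotation axis would be
    analyzed independently, as in the system above."""
    return count_extrema(yaw) >= min_extrema

shake = [0, 5, -5, 5, -5, 5, 0]      # oscillating yaw -> shake
nod_free = [0, 1, 2, 3, 3, 2, 1, 0]  # smooth drift -> no shake
print(is_head_shake(shake), is_head_shake(nod_free))  # True False
```
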
Results from the analysis of NMS need to be integrated with recognition results of the hand gestures in order to extract all the information expressed. Our search for works in automatic NMS analysis revealed none that capture the information from all the nonmanual cues of facial expression, head and body posture and movement. Some classify facial expression only BIB008 , BIB001 , BIB002 , while others classify head movement only BIB009 , BIB006 . Of these, there are only a couple of works that consider combining information extracted from nonmanual cues with results of gesture recognition. Ma et al. BIB007 modeled features extracted from lip motion and hand gestures with separate HMM channels using a modified version of Bourlard's multistream model BIB004 and resembling Vogler's Parallel HMM. Viterbi scores from each channel are combined at sign boundaries where synchronization occurs. The different time scales of hand gestures and lip motion were accounted for by having different numbers of states for the same phrase/sign in each channel. In experiments where the lip motion expressed the same word (in spoken Chinese) as the gestured sign, 9 out of 10 phrases that were incorrectly recognized with hand gesture modeling alone were correctly recognized when lip motion was also modeled. There are several issues involved in integrating information from NMS with sign gesture recognition. In BIB007 , the assumption was that each phrase uttered by the lips coincides with a sign/phrase in the gesture. However, in general, NMS may co-occur with one or more signs/phrases, and hence a method for dealing with the different time scales in such cases is required. Also, in BIB007 , the lip motion and hand gesturing convey identical information, while in general, NMS convey independent information, and the recognition results of NMS may not always serve to disambiguate results of hand gesture recognition.
In fact, NMS often independently convey information in multiple channels through upper and lower face expressions, and head and body movements. Multiple cameras may be required to capture the torso's movement and still obtain good resolution images of the face for facial expression analysis. While some of the schemes employed in general multimodal integration research might be useful for application to this domain, we note that most of these schemes involve at most two channels of information, one of which is generally speech/voice ( BIB003 , BIB005 ). It remains to be seen whether these can be applied to the multiple channels of information conveyed by NMS and hand gesturing in SL.
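The sign-boundary score combination used by Ma et al. BIB007 can be sketched as follows. The candidate signs, the log-likelihood values, and the function name `combine_at_boundary` are hypothetical; in a real system each score would come from Viterbi decoding of that modality's HMM channel:

```python
# Hypothetical per-channel Viterbi log-likelihoods for three candidate signs;
# in a real system these come from decoding each modality's HMM channel.
gesture_scores = {"MOTHER": -12.1, "FATHER": -11.8, "SISTER": -15.0}
lip_scores = {"MOTHER": -8.2, "FATHER": -10.5, "SISTER": -9.9}

def combine_at_boundary(channels, weights=None):
    """Sum (optionally weighted) per-channel log scores for each candidate
    sign at a synchronization point and return the best candidate."""
    weights = weights or [1.0] * len(channels)
    fused = {sign: sum(w * ch[sign] for w, ch in zip(weights, channels))
             for sign in channels[0]}
    return max(fused, key=fused.get), fused

best, fused = combine_at_boundary([gesture_scores, lip_scores])
print(best)  # MOTHER: gesture alone prefers FATHER; lip evidence corrects it
```

This mirrors the reported effect where fusing the lip channel corrected signs that the gesture channel alone misrecognized.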
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Abstract This paper describes a method of classifying single view deaf-and-mute sign language motion images. We suppose the sign language word is composed of a time sequence of units called cheremes. The chereme is described by handshape, movement, and location of the hand, which can be said to express the 3-D features of the sign language. First, a dictionary for recognizing the sign language is made based on the cheremes. Then, the macro 2-D features of the location of a hand and its movement are extracted from the red component of the input color image sequence. Further, the micro 2-D features of the shape of the hand are also extracted if necessary. The 3-D feature descriptions of the dictionary are converted into 2-D image features, and the input sign language image is classified according to the extracted features of the 2-D image. <s> BIB001 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> A gesture recognition method for Japanese sign language is presented. We have developed a posture recognition system using neural networks which could recognize a finger alphabet of 42 symbols. We then developed a gesture recognition system where each gesture specifies a word. Gesture recognition is more difficult than posture recognition because it has to handle dynamic processes. To deal with dynamic processes we use a recurrent neural network. Here, we describe a gesture recognition method which can recognize continuous gesture. We then discuss the results of our research. <s> BIB002 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> The design and evaluation of a two-stage neural network which can recognize isolated ASL signs is given. The input to this network is the hand shape and position data obtained from a DataGlove mounted with a Polhemus sensor. 
The first level consists of four backpropagation neural networks which can recognize the sign language phonology, namely, the 36 hand shapes, 10 locations, 11 orientations, and 11 hand movements. The recognized phonemes from the beginning, middle, and end of the sign are fed to the second stage which recognizes the actual signs. Both backpropagation and Kohonen's self-organizing neural networks were used to compare the performance and the expandability of the learned vocabulary. In the current work, six signers with differing hand sizes signed 14 signs which included hand shape, position, and motion fragile and triple robust signs. When a backpropagation network was used for the second stage, the results show that the network was able to recognize these signs with an overall accuracy of 86%. Further, the recognition results were linearly dependent on the size of the finger in relation to the metacarpophalangeal joint and the total length of the hand. When the second stage was a Kohonen's self-organizing network, the network could not only recognize the signs with 84% accuracy, but also expand its learned vocabulary through relabeling. <s> BIB003 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> A new pattern matching method, the partly-hidden Markov model, is proposed for gesture recognition. The hidden Markov model, which is widely used for time series pattern recognition, can deal with only piecewise stationary stochastic processes. We solved this problem by introducing the modified second-order Markov model, in which the first state is hidden and the second one is observable. As shown by the results of 6 sign-language recognition tests, the error rate was improved by 73% compared with the normal HMM.
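The partly-hidden Markov model above modifies standard HMM decoding. For reference, ordinary Viterbi decoding, which such variants build on, can be sketched with a toy two-state model; all parameters below are invented for illustration:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Standard Viterbi decoding in the log domain: most likely state path
    for a discrete observation sequence. pi: initial probs (N,), A:
    transition matrix (N, N), B: emission matrix (N, M)."""
    T, N = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, N), dtype=int)          # best predecessor per state
    for t in range(1, T):
        trans = logd[:, None] + np.log(A)       # (from, to) scores
        back[t] = trans.argmax(axis=0)
        logd = trans.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):               # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1], float(logd.max())

# Toy 2-state model: state 0 mostly emits symbol 0, state 1 mostly symbol 1.
pi = np.array([0.9, 0.1])
A = np.array([[0.8, 0.2], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
path, score = viterbi([0, 0, 1, 1], pi, A, B)
print(path)  # [0, 0, 1, 1]
```
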
<s> BIB004 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper documents the recognition method of deciphering Japanese sign language (JSL) using projected images. The goal of the movement recognition is to foster communication between the hearing impaired and people capable of normal speech. We use a stereo camera for recording three-dimensional movements, an image processing board for tracking movements, and a personal computer for an image processor charting the recognition of JSL patterns. This system works by formalizing the space area of the signers according to the characteristics of the human body, determining components such as location and movements, and then recognizing sign language patterns. The system is able to recognize JSL by determining the extent of similarities in the sign field, and does so even when vibrations in hand movements occur and when there are differences in body build. We obtained useful results from recognition experiments on 38 different JSL signs with two signers. <s> BIB005 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper is concerned with the video-based recognition of signs. Concentrating on the manual parameters of sign language, the system aims for the signer-dependent recognition of 262 different signs taken from Sign Language of the Netherlands. For Hidden Markov Modelling, a sign is considered a doubly stochastic process, represented by an unobservable state sequence. The observations emitted by the states are regarded as feature vectors that are extracted from video frames. This work deals with three topics: firstly, the recognition of isolated signs; secondly, the influence of variations of the feature vector on the recognition rate; and thirdly, an approach for the recognition of connected signs.
The system achieves recognition rates up to 94% for isolated signs and 73% for a reduced vocabulary of connected signs. <s> BIB006 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper describes the development of a corpus or database of hand-arm pointing gestures, considered as a basic element for gestural communication. The structure of the corpus is defined for natural pointing movements carried out in different directions, heights and amplitudes. It is then extended to movement primitives habitually used in sign language communication. The corpus is based on movements recorded using an optoelectronic recording system that allows the 3D description of movement trajectories in space. The main technical characteristics of the capture and pretreatment system are presented, and perspectives are highlighted for recognition and generation purposes. <s> BIB007 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> The automatic recognition of sign language is an attractive prospect; the technology exists to make it possible, while the potential applications are exciting and worthwhile. To date the research emphasis has been on the capture and classification of the gestures of sign language and progress in that work is reported. However, it is suggested that there are some greater, broader research questions to be addressed before full sign language recognition is achieved. The main areas to be addressed are sign language representation (grammars) and facial expression recognition. <s> BIB008 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper presents a sign language recognition system which consists of three modules: model-based hand tracking, feature extraction, and gesture recognition using a 3D Hopfield neural network (HNN). 
The first one uses the Hausdorff distance measure to track shape-variant hand motion, the second one applies the scale- and rotation-invariant Fourier descriptor to characterize hand figures, and the last one performs a graph matching between the input gesture model and the stored models by using a 3D modified HNN to recognize the gesture. Our system tests 15 different hand gestures. The experimental results show that our system can achieve above a 91% recognition rate, and the recognition process time is about 10 s. The major contribution of this paper is that we propose a 3D modified HNN for gesture recognition which is more reliable than the conventional methods. <s> BIB009 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> A large vocabulary sign language interpreter is presented with real-time continuous gesture recognition of sign language using a data glove. Sign language, which is usually known as a natural language with formal semantic definitions and syntactic rules, is a large set of hand gestures that are daily used to communicate with the hearing impaired. The most critical problem, end-point detection in a stream of gesture input, is first solved, and then statistical analysis is done according to four parameters in a gesture: posture, position, orientation, and motion. The authors have implemented a prototype system with a lexicon of 250 vocabularies and collected 196 training sentences in Taiwanese Sign Language (TWL). This system uses hidden Markov models (HMMs) for 51 fundamental postures, 6 orientations, and 8 motion primitives. In a signer-dependent way, a sentence of gestures based on these vocabularies can be continuously recognized in real-time, and the average recognition rate is 80.4%.
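The scale- and rotation-invariant Fourier descriptor used for hand-shape characterization in BIB009 can be sketched as follows. This is a generic formulation of the idea, not the paper's exact implementation:

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=8):
    """Scale- and rotation-invariant Fourier descriptor of a closed contour.
    contour: (N, 2) array of ordered boundary points."""
    z = contour[:, 0] + 1j * contour[:, 1]  # boundary as a complex signal
    F = np.fft.fft(z)
    F[0] = 0                 # drop DC term -> translation invariance
    mags = np.abs(F)         # magnitudes -> rotation/start-point invariance
    mags /= mags[1]          # normalize by first harmonic -> scale invariance
    return mags[1:1 + n_coeffs]

# A circular contour and a scaled, rotated copy give the same descriptor.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
big_rot = 3.0 * np.c_[np.cos(t + 0.7), np.sin(t + 0.7)]
d1, d2 = fourier_descriptor(circle), fourier_descriptor(big_rot)
print(np.allclose(d1, d2, atol=1e-6))  # True
```
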
<s> BIB010 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures. <s> BIB011 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> The paper describes a real-time system which tracks the uncovered/unmarked hands of a person performing sign language. It extracts the face and hand regions using their skin colors, computes blobs and then tracks the location of each hand using a Kalman filter. The system has been tested for hand tracking using actual sign-language motion by native signers. The experimental results indicate that the system is capable of tracking hands even while they are overlapping the face. <s> BIB012 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s>
<s> BIB013 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> The parallel multistream model is proposed for integrating sign language recognition and lip motion. The different time scales existing in sign language and lip motion can be tackled well using this approach. Preliminary experimental results have shown that this approach is efficient for the integration of sign language recognition and lip motion. The promising results indicate that the parallel multistream model can be a good solution in the framework of multimodal data fusion. An approach to recognize sign language that scales with the size of the vocabulary and a fast approach to locate lip corners are also proposed in this paper. <s> BIB014 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> In this paper, we present a new approach to recognizing hand signs. In this approach, motion recognition (the hand movement) is tightly coupled with spatial recognition (hand shape). The system uses multiclass, multidimensional discriminant analysis to automatically select the most discriminating linear features for gesture classification. A recursive partition tree approximator is proposed to do classification. This approach, combined with our previous work on hand segmentation, forms a new framework which addresses the three key aspects of hand sign interpretation: hand shape, location, and movement. The framework has been tested to recognize 28 different hand signs. The experimental results show that the system achieved a 93.2% recognition rate for test sequences that had not been used in the training phase. It is shown that our approach provides performance better than that of nearest neighbor classification in the eigensubspace.
<s> BIB015 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Automatic gesture recognition systems generally require two separate processes: a motion sensing process where some motion features are extracted from the visual input; and a classification process where the features are recognised as gestures. We have developed the Hand Motion Understanding (HMU) system that uses the combination of a 3D model-based hand tracker for motion sensing and an adaptive fuzzy expert system for motion classification. The HMU system understands static and dynamic hand signs of the Australian Sign Language (Auslan). This paper presents the hand tracker that extracts 3D hand configuration data with 21 degrees-of-freedom (DOFs) from a 2D image sequence that is captured from a single viewpoint, with the aid of a colour-coded glove. Then the temporal sequence of 3D hand configurations detected by the tracker is recognised as a sign by an adaptive fuzzy expert system. The HMU system was evaluated with 22 static and dynamic signs. Before training the HMU system achieved 91% recognition, and after training it achieved over 95% recognition. <s> BIB016 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Gesture-based applications widely range from replacing the traditional mouse as a position device to virtual reality and communication with the deaf. The article presents a fuzzy rule-based approach to spatio-temporal hand gesture recognition. This approach employs a powerful method based on hyperrectangular composite neural networks (HRCNNs) for selecting templates. Templates for each hand shape are represented in the form of crisp IF-THEN rules that are extracted from the values of synaptic weights of the corresponding trained HRCNNs.
Each crisp IF-THEN rule is then fuzzified by employing a special membership function in order to represent the degree to which a pattern is similar to the corresponding antecedent part. When an unknown gesture is to be classified, each sample of the unknown gesture is tested by each fuzzy rule. The accumulated similarity associated with all samples of the input is computed for each hand gesture in the vocabulary, and the unknown gesture is classified as the gesture yielding the highest accumulated similarity. Based on this method, we can implement a small-sized dynamic hand gesture recognition system. Two databases consisting of 90 spatio-temporal hand gestures are utilized for verifying its performance. An encouraging experimental result confirms the effectiveness of the proposed method. <s> BIB017 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Sign language is the language used by the deaf, which is a comparatively steadier expressive system composed of signs corresponding to postures and motions assisted by facial expression. The objective of sign language recognition research is to "see" the language of the deaf. The integration of sign language recognition and sign language synthesis jointly comprises a "human-computer sign language interpreter", which facilitates the interaction between the deaf and their surroundings. Considering the speed and performance of the recognition system, the CyberGlove is selected as the gesture input device in our sign language recognition system, the Semi-Continuous Dynamic Gaussian Mixture Model (SCDGMM) is used as the recognition technique, and a search scheme based on relative entropy is proposed and applied to SCDGMM-based sign word recognition. Compared with the SCDGMM recognizer without the search scheme, the recognition time of the SCDGMM recognizer with the search scheme is reduced almost 15-fold.
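The accumulated-similarity classification over fuzzified rules BIB017 can be sketched as follows. The hyperrectangles, the exponential membership function, and the class names are illustrative assumptions; the paper's exact membership function may differ:

```python
import numpy as np

def membership(x, lo, hi, gamma=2.0):
    """Degree to which sample x lies in or near the hyperrectangle [lo, hi].
    Inside -> 1; outside it decays with squared distance to the box.
    Illustrative membership function, not the paper's exact form."""
    d = np.maximum(lo - x, 0) + np.maximum(x - hi, 0)  # per-dim distance
    return float(np.exp(-gamma * np.sum(d ** 2)))

# Hypothetical rules: one hyperrectangle per gesture class (2-D features).
rules = {
    "FIST": (np.array([0.0, 0.0]), np.array([0.3, 0.3])),
    "OPEN": (np.array([0.7, 0.7]), np.array([1.0, 1.0])),
}

def classify(samples):
    """Accumulate membership over all samples of the unknown gesture and
    return the class with the highest accumulated similarity."""
    scores = {c: sum(membership(s, lo, hi) for s in samples)
              for c, (lo, hi) in rules.items()}
    return max(scores, key=scores.get), scores

samples = [np.array([0.1, 0.2]), np.array([0.25, 0.1]), np.array([0.4, 0.3])]
label, scores = classify(samples)
print(label)  # FIST
```
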
<s> BIB018 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Since the human hand is highly articulated and deformable, hand posture recognition is a challenging example in the research on view-independent object recognition. Due to the difficulties of the model-based approach, the appearance-based learning approach is promising to handle large variation in visual inputs. However, the generalization of many proposed supervised learning methods to this problem often suffers from the insufficiency of labeled training data. This paper describes an approach to alleviate this difficulty by adding a large unlabeled training set. Combining supervised and unsupervised learning paradigms, a novel and powerful learning approach, the Discriminant-EM (D-EM) algorithm, is proposed in this paper to handle the case of a small labeled training set. Experiments show that D-EM outperforms many other learning methods. Based on this approach, we implement a gesture interface to recognize a set of predefined gesture commands, and it is also extended to hand detection. This algorithm can also apply to other object recognition tasks. <s> BIB019 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Human motion recognition has many important applications, such as improved human-computer interaction and surveillance. A big problem that plagues this research area is that human movements can be very complex. Managing this complexity is difficult. We turn to American sign language (ASL) recognition to identify general methods that reduce the complexity of human motion recognition. We present a framework for continuous 3D ASL recognition based on linguistic principles, especially the phonology of ASL. This framework is based on parallel hidden Markov models (HMMs), which are able to capture both the sequential and the simultaneous aspects of the language. 
Each HMM is based on a single phoneme of ASL. Because the phonemes are limited in number, as opposed to the virtually unlimited number of signs that can be composed from them, we expect this framework to scale well to larger applications. We then demonstrate the general applicability of this framework to other human motion recognition tasks by extending it to gait recognition. <s> BIB020 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper is concerned with the automatic recognition of German continuous sign language. For the most user-friendliness only one single color video camera is used for image recording. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system design, which are in general based on subunits, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs will be outlined. The advantage of such a system will be a future reduction of necessary training material. Furthermore, a simplified enlargement of the existing vocabulary is expected. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits called fenone. K-means algorithm is used for the definition of such fenones. The software prototype of the system is currently evaluated in experiments. <s> BIB021 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper introduces a model-based hand gesture recognition system, which consists of three phases: feature extraction, training, and recognition. In the feature extraction phase, a hybrid technique combines the spatial (edge) and the temporal (motion) information of each frame to extract the feature images. 
Then, in the training phase, we use the principal component analysis (PCA) to characterize spatial shape variations and the hidden Markov models (HMM) to describe the temporal shape variations. A modified Hausdorff distance measurement is also applied to measure the similarity between the feature images and the pre-stored PCA models. The similarity measures are referred to as the possible observations for each frame. Finally, in recognition phase, with the pre-trained PCA models and HMM, we can generate the observation patterns from the input sequences, and then apply the Viterbi algorithm to identify the gesture. In the experiments, we prove that our method can recognize 18 different continuous gestures effectively. <s> BIB022 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> The aim of this paper is to raise the ethical problems which appear when hearing computer scientists work on the Sign Languages (SL) used by the deaf communities, specially in the field of Sign Language recognition. On one hand, the problematic history of institutionalised SL must be known. On the other hand, the linguistic properties of SL must be learned by computer scientists before trying to design systems with the aim to automatically translate SL into oral or written language or vice-versa. The way oral language and SL function is so different that it seems impossible to work on that topic without a close collaboration with linguists specialised in SL and deaf people. <s> BIB023 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Research on recognition and generation of signed languages and the gestural component of spoken languages has been held back by the unavailability of large-scale linguistically annotated corpora of the kind that led to significant advances in the area of spoken language. 
A major obstacle has been the lack of computational tools to assist in efficient analysis and transcription of visual language data. Here we describe SignStream, a computer program that we have designed to facilitate transcription and linguistic analysis of visual language. Machine vision methods to assist linguists in detailed annotation of gestures of the head, face, hands, and body are being developed. We have been using SignStream to analyze data from native signers of American Sign Language (ASL) collected in our new video collection facility, equipped with multiple synchronized digital video cameras. The video data and associated linguistic annotations are being made publicly available in multiple formats. <s> BIB024 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> To describe non-manual signals (NMS's) of Japanese Sign Language (JSL), we have developed the notational system sIGNDEX. The notation describes both JSL words and NMS's. We specify characteristics of sIGNDEX in detail. We have also made a linguistic corpus that contains 100 JSL utterances. We show how sIGNDEX successfully describes not only manual signs but also NMS's that appear in the corpus. Using the results of the descriptions, we conducted statistical analyses of NMS's, which provide us with intriguing facts about frequencies and correlations of NMS's. <s> BIB025 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This paper deals with the automatic recognition of German signs. The statistical approach is based on the Bayes decision rule for minimum error rate. Following speech recognition system designs, which are in general based on phonemes, here the idea of an automatic sign language recognition system using subunits rather than models for whole signs is outlined. The advantage of such a system will be a future reduction of necessary training material. 
Furthermore, a simplified enlargement of the existing vocabulary is expected, as new signs can be added to the vocabulary database without re-training the existing hidden Markov models (HMMs) for subunits. Since it is difficult to define subunits for sign language, this approach employs totally self-organized subunits. In first experiments, a recognition accuracy of 92.5% was achieved for 100 signs, which were previously trained. For 50 new signs, an accuracy of 81% was achieved without retraining of subunit-HMMs. <s> BIB026 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Hitherto, the major challenge to sign language recognition is how to develop approaches that scale well with increasing vocabulary size. We present an approach to large vocabulary, continuous Chinese sign language (CSL) recognition that uses phonemes instead of whole signs as the basic units. Since the number of phonemes is limited, HMM-based training and recognition of the CSL signal becomes more tractable and has the potential to recognize enlarged vocabularies. Furthermore, the proposed method facilitates CSL recognition when the finger-alphabet is blended with gestures. About 2400 phonemes are defined for CSL. One HMM is built for each phoneme, and then the signs are encoded based on these phonemes. A decoder that uses a tree-structured network is presented. Clustering of the Gaussians on the states, the language model and an N-best pass are used to improve the performance of the system. Experiments on a 5119-sign vocabulary are carried out, and the result is exciting.
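The k-means definition of self-organized subunits ("fenones") described in BIB021 and BIB026 can be sketched as follows; the synthetic two-blob data stand in for real frame-level feature vectors:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain k-means over frame-level feature vectors: the cluster indices
    serve as self-organized subunit ('fenone') labels. Deterministic,
    evenly spaced initialization; illustrative sketch only."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Two well-separated blobs of synthetic "frame features" -> two subunits.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (40, 2)), rng.normal(3.0, 0.1, (40, 2))])
centers, labels = kmeans(X, 2)
print(sorted(set(labels.tolist())))  # [0, 1]
```

Frames of the same blob receive the same subunit label; an HMM would then be trained per subunit rather than per whole sign.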
The recognition of strong connection is accomplished with the aid of subword trees, which describe the connection of subwords in each sign language word. In weak connection, the main problem is how to extract the best matched subwords and find their end-points with little help from context information. The proposed method improves the summing process of the Viterbi decoding algorithm, which is constrained in every individual model, and compares the end score at each frame to find the ending frame of a subword. Experimental results show an accuracy of 70% for continuous sign sentences that comprise no more than 4 subwords. <s> BIB028 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Principal Component Analysis (PCA) and Multiple Discriminant Analysis (MDA) have long been used for appearance-based hand posture recognition. In this paper, we propose a novel PCA/MDA scheme for hand posture recognition. Unlike other PCA/MDA schemes, the PCA layer acts as a crude classification. Since posture alone cannot provide sufficient discriminating information, each input pattern will be given a likelihood of being in the nodes of the PCA layer, instead of a strict division. Based on the Expectation-Maximization (EM) algorithm, we introduce three methods to estimate the parameters for this crude classification during training. The experiments on a 110-sign vocabulary show a significant improvement compared with the global PCA/MDA. <s> BIB029 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> We present an algorithm for extracting and classifying two-dimensional motion in an image sequence based on motion trajectories. First, a multiscale segmentation is performed to generate homogeneous regions in each frame. Regions between consecutive frames are then matched to obtain two-view correspondences.
Affine transformations are computed from each pair of corresponding regions to define pixel matches. Pixel matches over consecutive image pairs are concatenated to obtain pixel-level motion trajectories across the image sequence. Motion patterns are learned from the extracted trajectories using a time-delay neural network. We apply the proposed method to recognize 40 hand gestures of American Sign Language. Experimental results show that motion patterns of hand gestures can be extracted and recognized accurately using motion trajectories. <s> BIB030 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> Sign language is characterized by its interactivity and multimodality, which cause difficulties in data collection and annotation. To address these difficulties, we have developed a video-based Japanese sign language (JSL) corpus and a corpus tool for annotation and linguistic analysis. As the first step of linguistic annotation, we transcribed manual signs expressing lexical information as well as non-manual signs (NMSs) including head movements, facial actions, and posture that are used to express grammatical information. Our purpose is to extract grammatical rules from this corpus for the sign-language translation system under development. From this viewpoint, we will discuss methods for collecting elicited data, annotation required for grammatical analysis, as well as the corpus tool required for annotation and grammatical analysis. As the result of annotating 2800 utterances, we confirmed that there are at least 50 kinds of NMSs in JSL, using the head (seven kinds), jaw (six kinds), mouth (18 kinds), cheeks (one kind), eyebrows (four kinds), eyes (seven kinds), eye gaze (two kinds), and body posture (five kinds). We use this corpus for designing and testing an algorithm and grammatical rules for the sign-language translation system under development.
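The per-region affine estimation step in the trajectory-based approach BIB030 can be sketched as a least-squares fit over corresponding points; the synthetic correspondences below are illustrative:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points to dst points
    (the per-region step used to define pixel matches). Returns a 2x3
    matrix A such that dst ≈ [x, y, 1] @ A.T. Sketch only."""
    P = np.hstack([src, np.ones((len(src), 1))])  # homogeneous source points
    X, *_ = np.linalg.lstsq(P, dst, rcond=None)   # solve P @ X ≈ dst
    return X.T                                    # (2, 3): [R | t]

# Synthetic correspondences generated by a rotation + scale + translation.
theta, s, t = 0.3, 1.5, np.array([2.0, -1.0])
R = s * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
dst = src @ R.T + t
A = fit_affine(src, dst)
pred = np.hstack([src, np.ones((4, 1))]) @ A.T
print(np.allclose(pred, dst))  # True
```

Concatenating such per-region matches across consecutive frame pairs yields the pixel-level trajectories used for recognition.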
<s> BIB031 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> The aim of this paper is to specify some of the problems raised by the design of a gesture recognition system dedicated to Sign Language, and to propose suitable solutions. The three topics considered here concern the simultaneity of information conveyed by manual signs, the possible temporal or spatial synchronicity between the two hands, and the different classes of signs that may be encountered in a Sign Language sentence. <s> BIB032 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> In this paper, we introduce a hand gesture recognition system to recognize continuous gestures against a stationary background. The system consists of four modules: real-time hand tracking and extraction, feature extraction, hidden Markov model (HMM) training, and gesture recognition. First, we apply a real-time hand tracking and extraction algorithm to trace the moving hand and extract the hand region, then we use the Fourier descriptor (FD) to characterize spatial features and motion analysis to characterize the temporal features. We combine the spatial and temporal features of the input image sequence as our feature vector. After having extracted the feature vectors, we apply HMMs to recognize the input gesture. The gesture to be recognized is separately scored against different HMMs. The model with the highest score indicates the corresponding gesture. In the experiments, we have tested our system to recognize 20 different gestures, and the recognition rate is above 90%. <s> BIB033 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> We build upon a constrained, lab-based Sign Language recognition system with the goal of making it a mobile assistive technology.
We examine using multiple sensors for disambiguation of noisy data to improve recognition accuracy. Our experiment compares the results of training a small gesture vocabulary using noisy vision data, accelerometer data and both data sets combined. <s> BIB034 </s> Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning <s> DISCUSSION <s> This work discusses an approach for capturing and translating isolated gestures of American Sign Language into spoken and written words. The instrumented part of the system combines an AcceleGlove and a two-link arm skeleton. Gestures of the American Sign Language are broken down into unique sequences of phonemes called poses and movements, recognized by software modules trained and tested independently on volunteers with different hand sizes and signing ability. Recognition rates of independent modules reached up to 100% for 42 postures, orientations, 11 locations and 7 movements using linear classification. The overall sign recognizer was tested using a subset of the American Sign Language dictionary comprising 30 one-handed signs, achieving 98% accuracy. The system proved to be scalable: when the lexicon was extended to 176 signs and tested without retraining, the accuracy was 95%. This represents an improvement over classification based on hidden Markov models (HMMs) and neural networks (NNs). <s> BIB035
In the Gesture Workshop of 1997, Edwards BIB008 identified two aspects of SL communication that had often been overlooked by researchers: facial expression and the use of space and spatial relationships in signing, especially with regard to classifier signs. In the ensuing period, although there has been some work to tackle these aspects, the focus of research continues to be elsewhere and hence progress has been limited. Among the facial expression recognition works surveyed, none were capable of recognizing and interpreting upper face and lower face expressions from video sequences, while simultaneously modeling the dynamics and intensity of expressions. A few works recognize head movements, particularly nods and shakes, but none interpret the body movements in NMS. Apart from BIB014 , which sought to improve sign gesture recognition results by combining with lip reading, we are not aware of other work reporting results of integrating NMS and hand gestures. Works that interpret sign gestures whose form and manner of movement convey grammatical information mostly focused on spatial variations of the sign's movement. None of the works surveyed gave experimental results for interpretation of the mimetic classifier signs mentioned by Edwards BIB008 and Bossard et al. BIB032 , . It is obvious from the discussion in Section 3.4.2 that this aspect of signing has not received attention. Current systems that only consider the citation form of signs would miss important information conveyed in natural signing, such as movement dynamics that convey temporal aspect and spatial variations that convey subject-object agreement. Worse still, since current systems do not account for spatial relationships between signs, some signs would be completely undecipherable, for example classifier signs that describe spatial relationships between objects, or signs that point to a location that had previously been established as a referent position.
Noun-verb pairs like SEAT and SIT would be confused since the only difference between them is in the repetitive motion of the noun. Two issues that have received much attention are recognition of continuous signing in sentences (Section 3.4.1) and scaling to large sign vocabularies. To handle large vocabularies with limited training data, some researchers used the idea of sequential subunits ( BIB021 , BIB026 , BIB027 , BIB028 ), while others decomposed a sign gesture into its simultaneous components (Table 3) . Notably, Vogler did both: sign gestures were modeled as simultaneous, parallel channels of information, which were each in turn modeled with sequential subunits. The largest vocabulary reported in experiments was 5,119 CSL signs in Wang et al. BIB027 . In contrast, many of the other works are limited in the vocabulary size they can handle due to only using a subset of the information necessary for recognizing a comprehensive vocabulary. For example, it is common for input data to be from one hand only ( , BIB033 , BIB015 , BIB029 , BIB035 , BIB016 , BIB009 , BIB022 , , BIB004 , BIB010 , BIB002 , BIB001 , BIB011 , BIB003 ). Matsuo et al. BIB005 and Yang et al. BIB030 used input from both hands but only measured position and motion data. A few of the works used only hand appearance features as input without any position or orientation data ( BIB017 , BIB018 , BIB029 , BIB016 , BIB022 ). Even though all these works reported good results for sign recognition (possibly arising from either choice of vocabulary or some inherent information redundancy in gesture components), the existence of minimal sign pairs means that recognition of a comprehensive sign vocabulary is not possible without input from all the gesture components. From Tables 2 and 3 , we see that vision-based approaches have tended to experiment with smaller vocabulary sizes as compared to direct-measure device approaches.
The largest vocabulary size used was 262 in the recognition of isolated signs of the Netherlands SL BIB006 . This could be due to the difficulty in simultaneously extracting whole hand movement features and detailed hand appearance features from images. Most works that localize and track hand movement extract gross local features derived from the hand silhouette or contour. Thus, they may not be able to properly distinguish handshape and 3D hand orientation. Furthermore, handshape classification from multiple viewpoints is very difficult to achieve; Wu and Huang BIB019 is one of the few works to do so, although on a limited number (14) of handshapes. Many of the vision-based approaches achieved fairly good recognition results but at the expense of very restrictive image capture environments and, hence, robustness is a real problem. An interesting direction to overcome this limitation was taken in the wearable system of Brashear et al. BIB034 , where features from both vision and accelerometer data were used to classify signs. Signing was done in relatively unconstrained environments, i.e., while the signer was moving about in natural everyday settings. Continuous sentences constructed from a vocabulary of five signs were recognized with 90.5 percent accuracy, an improvement over using vision-only data (52.4 percent) and accelerometer-only data (65.9 percent). Low accuracy and precision in direct-measure devices can also affect recognition rate, a possibility in Kadous , since PowerGloves, which have coarse sensing, were used. At present, it is difficult to directly compare recognition results reported in the literature. Factors that could influence results include restrictions on vocabulary (to avoid minimal pairs or signs performed near the face), slower than normal signing speed, and unnatural signing to avoid occlusion. Unfortunately, this kind of experimental information is usually not reported.
Another important issue is that very few systems have used data from native signers. Some exceptions are Imagawa et al. BIB012 and Tamura and Kawasaki BIB001 . Tanibata et al. used a professional interpreter. Braffort BIB023 made the point that the goal of recognizing natural signing requires close collaboration with native signers and SL linguists. Also, as the field matures, it is timely to tackle the problem of reproducibility by establishing standard databases. There are already some efforts in this direction. Neidle et al. BIB024 describe a corpus of native ASL signing that is being collected for the purpose of linguistic research as well as for aiding vision-based sign recognition research. Other efforts in this direction include BIB007 , BIB025 , BIB031 . We mentioned in the introduction that methods developed to solve problems in SL recognition can be applied to non-SL domains. An example of this is Nam and Wohn's work ( BIB013 ) on recognizing deictic, mimetic and pictographic gestures. Each gesture was broken down into attributes of handshape, hand orientation, and movement in a manner similar to decomposing sign gestures into their components. They further decomposed movement into sequential subunits of movement primitives and HMMs were employed to explicitly model connecting movements, similar to the approach in . In BIB020 , Vogler et al. applied the framework of decomposing movement into sequential subunits for the analysis of human gait. Three different gaits (walking on level terrain, up a slope, down a slope) were distinguished by analyzing all gaits as consisting of subunits (half-steps) and modeling the subunits with HMMs.
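Several of the works above reduce recognition to decoding over hidden Markov models of subunits, where the Viterbi algorithm finds the most likely state sequence. As a purely illustrative sketch (the surveyed systems use continuous observation densities and far larger models; the two-state example below is invented for demonstration), a minimal log-space Viterbi decoder for a discrete HMM can be written as:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely state path through a discrete HMM (log-space Viterbi)."""
    # scores for the first observation
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # best predecessor state for s at time t
            prev, score = max(
                ((p, V[t - 1][p] + math.log(trans_p[p][s]) + math.log(emit_p[s][obs[t]]))
                 for p in states),
                key=lambda x: x[1])
            V[t][s] = score
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]
```

Subunit-based recognizers run such decoders over networks of concatenated subunit models rather than a single HMM, but the dynamic-programming core is the same.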
Survey paper on intrusion detection techniques <s> 4) FUZZY CLUSTERING FOR IDS: <s> In this paper, we introduce a novel technique, called F-APACS, for mining fuzzy association rules. Existing algorithms involve discretizing the domains of quantitative attributes into intervals so as to discover quantitative association rules. These intervals may not be concise and meaningful enough for human experts to easily obtain nontrivial knowledge from those rules discovered. Instead of using intervals, F-APACS employs linguistic terms to represent the revealed regularities and exceptions. The linguistic representation is especially useful when those rules discovered are presented to human experts for examination. The definition of linguistic terms is based on set theory and hence we call the rules having these terms fuzzy association rules. The use of fuzzy techniques makes F-APACS resilient to noise such as inaccuracies in physical measurements of real-life entities and missing values in the databases. Furthermore, F-APACS employs adjusted difference analysis which has the advantage that it does not require any user-supplied thresholds which are often hard to determine. The fact that F-APACS is able to mine fuzzy association rules which utilize linguistic representation and that it uses an objective yet meaningful confidence measure to determine the interestingness of a rule makes it very effective at the discovery of rules from a real-life transactional database of a PBX system provided by a telecommunication corporation. <s> BIB001 </s> Survey paper on intrusion detection techniques <s> 4) FUZZY CLUSTERING FOR IDS: <s> The Fuzzy Intrusion Recognition Engine (FIRE) is a network intrusion detection system that uses fuzzy systems to assess malicious activity against computer networks. The system uses an agent-based approach to separate monitoring tasks. Individual agents perform their own fuzzification of input data sources.
All agents communicate with a fuzzy evaluation engine that combines the results of individual agents using fuzzy rules to produce alerts that are true to a degree. Several intrusion scenarios are presented along with the fuzzy systems for detecting the intrusions. The fuzzy systems are tested using data obtained from networks under simulated attacks. The results show that fuzzy systems can easily identify port scanning and denial of service attacks. The system can be effective at detecting some types of backdoor and Trojan horse attacks. <s> BIB002
The underlying premise of this intrusion detection model is to describe attacks as instances of an ontology using a semantically rich language like DAML. The ontology captures information about attacks, such as the system component affected, the consequences of the attack, the means of attack, and the location of the attacker. Such a target-centric ontology has already been developed; hence the intrusion detection model consists of two phases. The initial phase uses data mining techniques to analyze data streams that capture process, system and network states and to detect anomalous behavior, while the second, higher-level phase reasons over data representative of the anomaly, defined as an instance of the ontology. One way to build models from these data streams is to use fuzzy clustering, which takes a dissimilarity matrix of the objects to be clustered as input. The objective function is based on selecting representative objects from the feature set in such a way that the total fuzzy dissimilarity within each cluster is minimized BIB002 BIB001 .
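A minimal sketch of such a fuzzy clustering step is given below. This is a generic fuzzy c-means on one-dimensional data with a simple deterministic initialization, shown only to illustrate the idea of minimizing total fuzzy dissimilarity within clusters; it is not the specific objective function of the cited works.

```python
def fuzzy_c_means(data, c, m=2.0, iters=50):
    """Generic fuzzy c-means on 1-D data: returns cluster centers and the
    membership matrix u, where u[i][j] is the degree to which point i
    belongs to cluster j (each row sums to 1). Requires c >= 2."""
    # deterministic initialization: spread initial centers across the data
    centers = [data[i * (len(data) - 1) // (c - 1)] for i in range(c)]
    for _ in range(iters):
        u = []
        for x in data:
            d = [abs(x - v) + 1e-9 for v in centers]  # avoid division by zero
            u.append([1.0 / sum((d[j] / d[k]) ** (2 / (m - 1)) for k in range(c))
                      for j in range(c)])
        # update centers as membership-weighted means, which minimizes the
        # total fuzzy dissimilarity within each cluster
        centers = [sum((u[i][j] ** m) * data[i] for i in range(len(data))) /
                   sum(u[i][j] ** m for i in range(len(data)))
                   for j in range(c)]
    return centers, u
```

Each row of the membership matrix sums to 1, so every object belongs to every cluster to some degree; this soft assignment is what distinguishes fuzzy clustering from the hard partitioning of k-means.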
Survey paper on intrusion detection techniques <s> F. Intrusion Detection based on K-Means Clustering and OneR Classification [19] <s> The process of monitoring the events occurring in a computer system or network and analyzing them for signs of intrusions is known as an intrusion detection system (IDS). This paper presents two hybrid approaches for modeling IDS. Decision trees (DT) and support vector machines (SVM) are combined as a hierarchical hybrid intelligent system model (DT-SVM) and an ensemble approach combining the base classifiers. The hybrid intrusion detection model combines the individual base classifiers and other hybrid machine learning paradigms to maximize detection accuracy and minimize computational complexity. Empirical results illustrate that the proposed hybrid systems provide more accurate intrusion detection systems. <s> BIB001 </s> Survey paper on intrusion detection techniques <s> F. Intrusion Detection based on K-Means Clustering and OneR Classification [19] <s> Intrusion detection is a necessary step to identify unusual access or attacks to secure internal networks. In general, intrusion detection can be approached by machine learning techniques. In the literature, advanced techniques based on hybrid learning or ensemble methods have been considered, and related work has shown that they are superior to models using single machine learning techniques. This paper proposes a hybrid learning model based on triangle area based nearest neighbors (TANN) in order to detect attacks more effectively. In TANN, k-means clustering is first used to obtain cluster centers corresponding to the attack classes, respectively. Then, the triangle area formed by two cluster centers and one data point from the given dataset is calculated to form a new feature signature of the data. Finally, the k-NN classifier is used to classify similar attacks based on the new feature represented by triangle areas.
By using KDD-Cup '99 as the simulation dataset, the experimental results show that TANN can effectively detect intrusion attacks and provide higher accuracy and detection rates, and a lower false alarm rate, than three baseline models based on support vector machines, k-NN, and the hybrid centroid-based classification model combining k-means and k-NN. <s> BIB002
The approach, KM+1R, combines k-means clustering with the OneR classification technique. The KDD Cup '99 set is used as a simulation dataset. The results show that the proposed approach achieves better accuracy and detection rates, particularly in reducing false alarms. Related work and research publications based on hybrid approaches have been widely explored, such as in BIB002 . The detection rate (DR), false positive (FP), false negative (FN), true positive (TP), false alarm (FA), and accuracy of each approach are also investigated. Each approach has distinctive strengths and weaknesses. Some approaches are strong in detection but high in false alarms, and vice versa. For instance, in the author proposed a new three-level decision tree classification, which focuses on the detection rate. The authors of BIB001 model the IDS using a hierarchical hybrid intelligent system combining a decision tree and a support vector machine (DT-SVM). While DT-SVM produces a high detection rate, it lacks the ability to differentiate attacks from normal behavior. More recently, the approach suggested in BIB002 offers a high detection rate but comes with a higher false alarm rate compared to others. In short, a number of hybrid techniques have been proposed in the intrusion detection field and related work, but there is still room to improve the accuracy and detection rate as well as the false alarm rate. The main goal of utilizing the K-Means clustering approach is to split and group data into normal and attack instances. K-Means partitions the input dataset into k clusters according to initial values known as seed points, which become each cluster's centroid or cluster center. The centroid is the mean value of the numerical data contained within a cluster. The K-Means algorithm works as follows: 1. Select initial centers of the K clusters. Repeat steps 2 through 3 until the cluster membership stabilizes. 2.
Generate a new partition by assigning each data point to its closest cluster center. 3. Compute the new cluster centers as the centroids (mean values) of the clusters.
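The K-Means loop described above can be sketched in a few lines on one-dimensional data (illustrative only; KM+1R couples this clustering step with a OneR classifier, which is not shown here):

```python
def k_means(data, k, iters=100):
    """K-Means: pick seed centers, then alternate assignment and
    centroid update until the membership stabilizes."""
    centers = list(data[:k])  # initial centers (simple deterministic choice)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:  # assign each point to its closest center
            j = min(range(k), key=lambda j: abs(x - centers[j]))
            clusters[j].append(x)
        # recompute each centroid as the mean of its cluster
        new_centers = [sum(c) / len(c) if c else centers[j]
                       for j, c in enumerate(clusters)]
        if new_centers == centers:  # membership stabilized
            break
        centers = new_centers
    return centers, clusters
```

On toy data [1, 2, 10, 11, 12] with k = 2 this converges to centers 1.5 and 11.0, splitting the data into two groups in the way the normal/attack split is intended to work.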
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Neural Networks <s> We introduce a Deep Stochastic IOC RNN Encoder-decoder framework, DESIRE, for the task of future predictions of multiple interacting agents in dynamic scenes. DESIRE effectively predicts future locations of objects in multiple scenes by 1) accounting for the multi-modal nature of the future prediction (i.e., given the same context, the future may vary), 2) foreseeing the potential future outcomes and making a strategic prediction based on that, and 3) reasoning not only from the past motion history, but also from the scene context as well as the interactions among the agents. DESIRE achieves these in a single end-to-end trainable neural network model, while being computationally efficient. The model first obtains a diverse set of hypothetical future prediction samples employing a conditional variational auto-encoder, which are ranked and refined by the following RNN scoring-regression module. Samples are scored by accounting for accumulated future rewards, which enables better long-term strategic decisions similar to IOC frameworks. An RNN scene context fusion module jointly captures past motion histories, the semantic scene context and interactions among multiple agents. A feedback mechanism iterates over the ranking and refinement to further boost the prediction accuracy. We evaluate our model on two publicly available datasets: KITTI and Stanford Drone Dataset. Our experiments show that the proposed model significantly improves the prediction accuracy compared to other baseline methods. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Neural Networks <s> In this paper, we propose an efficient vehicle trajectory prediction framework based on a recurrent neural network.
Basically, the characteristic of the vehicle's trajectory is different from that of regular moving objects since it is affected by various latent factors including road structure, traffic rules, and driver's intention. Previous state-of-the-art approaches use sophisticated vehicle behavior models describing these factors and derive complex trajectory prediction algorithms, which require a system designer to conduct intensive model optimization for practical use. Our approach is data-driven and simple to use in that it learns complex behavior of the vehicles from a massive amount of trajectory data through a deep neural network model. The proposed trajectory prediction method employs a recurrent neural network called long short-term memory (LSTM) to analyze the temporal behavior and predict the future coordinates of the surrounding vehicles. The proposed scheme feeds the sequence of vehicles' coordinates obtained from sensor measurements to the LSTM and produces probabilistic information on the future location of the vehicles over an occupancy grid map. The experiments conducted using the data collected from highway driving show that the proposed method can produce a reasonably good estimate of the future trajectory. <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Neural Networks <s> As part of a complete software stack for autonomous driving, NVIDIA has created a neural-network-based system, known as PilotNet, which outputs steering angles given images of the road ahead. PilotNet is trained using road images paired with the steering angles generated by a human driving a data-collection car. It derives the necessary domain knowledge by observing human drivers. This eliminates the need for human engineers to anticipate what is important in an image and foresee all the necessary rules for safe driving.
Road tests demonstrated that PilotNet can successfully perform lane keeping in a wide variety of driving conditions, regardless of whether lane markings are present or not. The goal of the work described here is to explain what PilotNet learns and how it makes its decisions. To this end we developed a method for determining which elements in the road image most influence PilotNet's steering decision. Results show that PilotNet indeed learns to recognize relevant objects on the road. In addition to learning the obvious features such as lane markings, edges of roads, and other cars, PilotNet learns more subtle features that would be hard to anticipate and program by engineers, for example, bushes lining the edge of the road and atypical vehicle classes. <s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Neural Networks <s> To safely and efficiently navigate through complex traffic scenarios, autonomous vehicles need to have the ability to predict the future motion of surrounding vehicles. Multiple interacting agents, the multi-modal nature of driver behavior, and the inherent uncertainty involved in the task make motion prediction of surrounding vehicles a challenging problem. In this paper, we present an LSTM model for interaction aware motion prediction of surrounding vehicles on freeways. Our model assigns confidence values to maneuvers being performed by vehicles and outputs a multi-modal distribution over future motion based on them. We compare our approach with the prior art for vehicle motion prediction on the publicly available NGSIM US-101 and I-80 datasets. Our results show an improvement in terms of RMS values of prediction error. We also present an ablative analysis of the components of our proposed model and analyze the predictions made by the model in complex traffic scenarios.
<s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Neural Networks <s> Recent algorithmic improvements and hardware breakthroughs resulted in a number of success stories in the field of AI impacting our daily lives. However, despite its ubiquity AI is only just starting to make advances in what may arguably have the largest impact thus far, the nascent field of autonomous driving. In this work we discuss this important topic and address one of the crucial aspects of the emerging area, the problem of predicting the future state of an autonomous vehicle's surroundings, necessary for safe and efficient operations. We introduce a deep learning-based approach that takes into account the current state of traffic actors and produces rasterized representations of each actor's vicinity. The raster images are then used by deep convolutional models to infer the future movement of actors while accounting for the inherent uncertainty of the prediction task. Extensive experiments on real-world data strongly suggest benefits of the proposed approach. Moreover, following successful tests the system was deployed to a fleet of autonomous vehicles. <s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Neural Networks <s> Predicting trajectories of pedestrians is quintessential for autonomous robots which share the same environment with humans. In order to effectively and safely interact with humans, trajectory prediction needs to be both precise and computationally efficient. In this work, we propose a convolutional neural network (CNN) based human trajectory prediction approach. Unlike more recent LSTM-based models which attend sequentially to each frame, our model supports increased parallelism and effective temporal representation. The proposed compact CNN model is faster than the current approaches yet still yields competitive results.
<s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Using Neural Networks <s> Autonomous driving presents one of the largest problems that the robotics and artificial intelligence communities are facing at the moment, both in terms of difficulty and potential societal impact. Self-driving vehicles (SDVs) are expected to prevent road accidents and save millions of lives while improving the livelihood and life quality of many more. However, despite large interest and a number of industry players working in the autonomous domain, there still remains more to be done in order to develop a system capable of operating at a level comparable to the best human drivers. One reason for this is the high uncertainty of traffic behavior and the large number of situations that an SDV may encounter on the roads, making it very difficult to create a fully generalizable system. To ensure safe and efficient operations, an autonomous vehicle is required to account for this uncertainty and to anticipate a multitude of possible behaviors of traffic actors in its surroundings. We address this critical problem and present a method to predict multiple possible trajectories of actors while also estimating their probabilities. The method encodes each actor’s surrounding context into a raster image, used as input by deep convolutional networks to automatically derive relevant features for the task. Following extensive offline evaluation and comparison to state-of-the-art baselines, the method was successfully tested on SDVs in closed-course tests. <s> BIB007
In terms of classifying objects from images, neural networks have seen a steady rise in popularity in recent years, particularly the more elaborate and complex convolutional and recurrent networks from the field of deep learning. Neural networks have the advantage of being able to learn important and robust features given training data that is relevant and in sufficient quantity. Considering that a significant percentage of automotive sensor data consists of images, convolutional neural networks (CNNs) are seeing widespread use in the related literature, for both classification and tracking problems. The advantage of CNNs over more conventional classifiers lies in the convolutional layers, where various filters and feature maps are obtained during training. CNNs are capable of learning object features by means of multiple complex operations and optimizations, and the appropriate choice of network parameters and architecture can ensure that these features contain the most useful correlations that are needed for the robust identification of the targeted objects. While this choice is most often an empirical process, a wide assortment of network configurations exists in the related literature, aimed at solving classification and tracking problems, with high accuracies claimed by the authors. Where object identification is concerned, in some cases the output of the fully-connected component of the CNN is used, while in other situations the values of the hidden convolutional layers are exploited in conjunction with other filtering and refining methods. Many of the approaches presented in the literature that are based on neural networks use either recurrent neural networks (RNNs), which explicitly take into account a history composed of the past states of the actors, or simpler convolutional neural networks (CNNs).
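The feature maps mentioned above are produced by sliding small filters over the input image. A bare-bones illustration of that operation (cross-correlation, as implemented in CNN layers; real networks add many channels, nonlinearities, pooling, and learned kernels):

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution of a single-channel image with one kernel,
    producing one feature map (no padding, stride 1)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    feature_map = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            # dot product of the kernel with the image patch at (r, c)
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        feature_map.append(row)
    return feature_map
```

With a horizontal-gradient kernel such as [[-1, 1]], a vertical edge in the image shows up as large responses in the feature map; during training, a CNN learns such kernels from data instead of having them hand-designed.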
One of the most interesting systems, albeit quite complex, is DESIRE BIB001 , which has the goal of predicting the future locations of multiple interacting agents in dynamic (driving) scenes. It considers the multi-modal nature of the future prediction, i.e. given the same context, the future may vary. It may foresee the potential future outcomes and make a strategic prediction based on that, and it can reason not only from the past motion history, but also from the scene context as well as the interactions among the agents. DESIRE achieves these goals in a single end-to-end trainable neural network model, while being computationally efficient. Using a deep learning framework, DESIRE can simultaneously: generate diverse hypotheses to reflect a distribution over plausible futures, reason about the interactions between multiple dynamic objects and the scene context, and rank and refine hypotheses with consideration of long-term future rewards. The corresponding optimization problem tries to maximize the potential future reward of the prediction, using the following mechanisms ( Figure 9 ): 1. Diverse sample generation: a conditional variational auto-encoder (CVAE) is used to learn a sampling model that, given observations of past trajectories, produces a diverse set of prediction hypotheses to capture the multimodality of the space of plausible futures. The CVAE introduces a latent variable to account for the ambiguity of the future, which is combined with an RNN that encodes the past trajectories, to generate hypotheses using another RNN. Essentially, a CVAE introduces stochastic latent variables z_i that are learned to encode a diverse set of predictions Y_t given input X_t , making it suitable for modeling one-to-many mappings; 2. IOC-based ranking and refinement: a ranking module determines the most likely hypotheses, while incorporating scene context and interactions.
Since an optimal policy is hard to determine where multiple agents make strategic interdependent choices, the ranking objective is formulated to account for potential future rewards similar to inverse optimal control (IOC) or inverse reinforcement learning (IRL). This also ensures generalization to new situations further into the future, given limited training data. The module is trained in a multitask framework with a regression-based refinement of the predicted samples. In the testing phase, there are multiple iterations in order to obtain more accurate refinements of the future prediction. Predicting a distant future can be far more challenging than predicting a closer one. Therefore, an agent is trained to choose actions that maximize long-term rewards to achieve its goal. Instead of designing a reward function manually, IOC learns an unknown reward function. The RNN model assigns rewards to each prediction hypothesis and measures its goodness based on the accumulated long-term rewards; 3. Scene context fusion: this module aggregates the interactions between agents and the scene context encoded by a CNN. The fused embedding is channeled to the RNN scoring module and allows it to produce the rewards based on the contextual information. In , a method to predict trajectories of surrounding vehicles is proposed using a long short-term memory (LSTM) network, with the goal of taking into account the relationship between the ego car and surrounding vehicles. The LSTM is a type of recurrent neural network (RNN) capable of learning long-term dependencies. Generally, an RNN has a vanishing gradient problem. An LSTM is able to deal with this through a forget gate, designed to control the information between the memory cells in order to store the most relevant previous data. The proposed method considers the ego car and four surrounding vehicles.
It is assumed that drivers generally pay attention to the relative distance and speed with respect to the other cars when they intend to change a lane. Based on this assumption, the relative amounts between the target and the four surrounding vehicles are used as the input of the LSTM network. The feature vector x_t at time step t is defined by twelve features: lateral position of the target vehicle, longitudinal position of the target vehicle, lateral speed of the target vehicle, longitudinal speed of the target vehicle, relative distance between target and preceding vehicle, relative speed between target and preceding vehicle, relative distance between target and following vehicle, relative speed between target and following vehicle, relative distance between target and lead vehicle, relative speed between target and lead vehicle, relative distance between target and ego vehicle, and relative speed between target and ego vehicle. The input vector of the LSTM network is a sequence of x_t's for past time steps. The output is the feature vector at the next time step t+1. A trajectory is predicted by iteratively using the output of the network as the input vector for the subsequent time step.
In BIB002 an efficient trajectory prediction framework is proposed, which is also based on an LSTM. This approach is data-driven and learns complex behaviors of the vehicles from a massive amount of trajectory data. The LSTM receives the coordinates and velocities of the surrounding vehicles as inputs and produces probabilistic information about the future location of the vehicles over an occupancy grid map (Figure 10: the architecture of the system BIB002). The experiments show that the proposed method has better prediction accuracy than Kalman filtering. The occupancy grid map is widely adopted for probabilistic localization and mapping. It reflects the uncertainty of the predicted trajectories.
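The iterative prediction scheme just described — feeding each output back as the next input — can be sketched as follows, where `one_step` is a stand-in for the trained LSTM (here reduced to a constant-velocity update on four of the twelve features):

```python
def one_step(features):
    # stand-in for the trained LSTM cell: advance (x, y, vx, vy) one step;
    # the real network consumes all twelve relative-distance/speed features
    x, y, vx, vy = features
    return (x + vx, y + vy, vx, vy)

def rollout(features, horizon):
    # iterative prediction: the output at time t+1 is re-used as the
    # input for time t+2, and so on up to the prediction horizon
    trajectory = []
    for _ in range(horizon):
        features = one_step(features)
        trajectory.append(features[:2])
    return trajectory
```

A side effect of this loop, noted later in the section, is that any error in an early step is carried into every subsequent input.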
In BIB002 , the occupancy grid map is constructed by partitioning the range under consideration into several grid cells. The grid size is determined such that a grid cell approximately covers a quarter lane, in order to recognize the movement of a vehicle within the same lane as well as the length of the vehicle (Figure 11). When predictions are needed for different time ranges (e.g., ∆ = 0.5s, 1s, 2s), the LSTM is trained independently for each time range. The LSTM produces the probability of occupancy for each grid cell. Let (i_x, i_y) be a two-dimensional index for the occupancy grid. Then the softmax layer in the i-th LSTM produces the probability P_o(i_x, i_y) for the grid element (i_x, i_y). Finally, the outputs of the n LSTMs are combined using . The probability of occupancy P_o(i_x, i_y) summarizes the prediction of the future trajectory for all n vehicles in a single map. Alternatively, the same LSTM architecture can be used to directly predict the coordinates of a vehicle as a regression task. Instead of using the softmax layer to compute probabilities, the system can produce two real coordinate values x and y.
In BIB004 , another LSTM model is described for interaction-aware motion prediction. Confidence values are assigned to the maneuvers that are performed by vehicles. Based on them, a multi-modal distribution over future motions is computed. More specifically, the model assigns probabilities to different maneuver classes, and outputs maneuver-specific predictions for each maneuver class. The LSTM uses as input the track histories of the ego vehicle and its surrounding vehicles, and the lane structure of the freeway. It assigns confidence values to six maneuver classes and predicts a multi-modal distribution of the possibilities of future motion. Taking into account the time constraints of a real-time system, BIB005 uses simple feed-forward CNN architectures for the prediction task.
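The grid-based output above can be sketched as follows. Note that the text elides the exact operator used to combine the n per-vehicle maps; the noisy-OR combination used here is an assumption chosen purely for illustration:

```python
import math

def softmax_grid(logits):
    # logits: 2-D list over grid cells -> occupancy probabilities P_o(i_x, i_y)
    # that sum to 1 over the whole grid (one map per LSTM / per vehicle)
    flat = [v for row in logits for v in row]
    m = max(flat)
    exp = [math.exp(v - m) for v in flat]
    z = sum(exp)
    w = len(logits[0])
    return [[exp[r * w + c] / z for c in range(w)] for r in range(len(logits))]

def combine_maps(per_vehicle_maps):
    # merge the n per-vehicle maps into a single map; the survey elides the
    # combination operator, so we assume noisy-OR: P = 1 - prod(1 - P_i)
    h, w = len(per_vehicle_maps[0]), len(per_vehicle_maps[0][0])
    out = [[1.0] * w for _ in range(h)]
    for pm in per_vehicle_maps:
        for r in range(h):
            for c in range(w):
                out[r][c] *= 1.0 - pm[r][c]
    return [[1.0 - v for v in row] for row in out]
```

With noisy-OR, a cell that two vehicles each occupy with probability 0.5 ends up occupied with probability 0.75, which matches the intuition that the maps reinforce each other.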
Instead of manually defining features that represent the context for each actor, the authors rasterize the scene for each actor into an RGB image. Then, they train the CNN using these rasterized images as inputs to predict the actors' trajectories, where the network automatically infers the relevant features. Optionally, the model can also take as input a current state of the actor represented as a vector containing velocity, acceleration, and heading change rate (position and heading are not required because they are implicitly included in the raster image), and concatenate the resulting vector with the flattened output of the base CNN. Finally, the combined features are passed through a fully connected layer. A similar approach is used in BIB007 , which presents a method to predict multiple possible trajectories of actors while also estimating their probabilities. It encodes each actor's surrounding context into a raster image, used as input by a deep convolutional network to automatically derive the relevant features for the task. Given the raster image and the state estimates of actors at a time step, the CNN is used to predict a multitude of possible future state sequences, as well as the probability of each sequence. As part of a complete software stack for autonomous driving, NVIDIA created a system based on a CNN, called PilotNet BIB003 , which outputs steering angles given images of the road ahead. This system is trained using road images paired with the steering angles generated by a human driving a car that collects data. The authors developed a method for determining which elements in the road image influence its steering decision the most. 
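The rasterized-input scheme can be sketched as below; the base CNN is replaced by a trivial per-channel pooling stub and the "fully connected layer" uses fixed illustrative weights, so only the data flow — raster features concatenated with the (velocity, acceleration, heading-change-rate) state vector — mirrors the described model:

```python
def cnn_stub(raster):
    # stand-in for the base CNN: mean-pool each channel of the raster image
    return [sum(channel) / len(channel) for channel in raster]

def predict_trajectory(raster, state, horizon=3):
    # concatenate the flattened CNN output with the actor-state vector
    # (velocity, acceleration, heading change rate), then apply a
    # "fully connected layer" with fixed toy weights
    features = cnn_stub(raster) + list(state)
    s = sum(0.1 * f for f in features)
    return [(t * s, 0.5 * t * s) for t in range(1, horizon + 1)]
```

Position and heading are deliberately absent from the state vector, as in the paper, since the raster image already encodes them.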
It seems that in addition to learning the obvious features such as lane markings, edges of roads and other cars, the system learns more subtle features that would be hard to anticipate and program by engineers, e.g., bushes lining the edge of the road and atypical vehicle classes, while ignoring structures in the camera images that are not relevant to driving. This capability is derived from data without the need of hand-crafted rules. In , the authors propose a learnable end-to-end model with a deep neural network that reasons about both high level behavior and long-term trajectories. Inspired by how humans perform this task, the network exploits motion and prior knowledge about the road topology in the form of maps containing semantic elements such as lanes, intersections and traffic lights. The so-called IntentNet is a fully-convolutional neural network that outputs three types of variables in a single forward pass corresponding to: detection scores for vehicle and background classes, high level action probabilities corresponding to discrete intentions, and bounding box regressions in the current and future time steps to represent the intended trajectory. This design enables the system to propagate uncertainty through the different components and is reported to be computationally efficient. A CNN is also used in BIB006 for an end-to-end trajectory prediction model which is competitive with more complicated state-of-the-art LSTM-based techniques which require more contextual information. Highly parallelizable convolutional layers are employed to handle temporal dependencies. The CNN is a simple sequence-to-sequence architecture. Trajectory histories are used as input and embedded to a fixed size through a fully-connected layer. The convolutional layers are stacked and used to enforce temporal consistency. 
Finally, the features from the final convolutional layer are concatenated and passed through a fully-connected layer to generate all predicted positions at once. The authors found that predicting one time step at a time leads to worse results than predicting all future times at once. A possible reason is that the error of the current prediction is propagated forward in time in a highly correlated fashion.
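The one-shot decoding favoured by the authors can be contrasted with iterative decoding in a toy sketch: all future positions are produced in a single pass from an embedded history, so no predicted point is ever fed back as an input (the embedding and constant-velocity extrapolation below are illustrative stand-ins for the learned layers):

```python
def embed(history):
    # "fully-connected embedding" stub: last position and last velocity
    (x0, y0), (x1, y1) = history[-2], history[-1]
    return (x1, y1, x1 - x0, y1 - y0)

def predict_all_at_once(history, horizon):
    # emit every future position in one pass, so the error of one step
    # cannot be compounded by feeding it back as in iterative decoding
    x, y, vx, vy = embed(history)
    return [(x + (t + 1) * vx, y + (t + 1) * vy) for t in range(horizon)]
```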
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Learning Features from Convolutional Layers <s> Visual object tracking is a challenging computer vision problem with numerous real-world applications. This paper investigates the impact of convolutional features for the visual tracking problem. We propose to use activations from the convolutional layer of a CNN in discriminative correlation filter based tracking frameworks. These activations have several advantages compared to the standard deep features (fully connected layers). Firstly, they mitigate the need of task specific fine-tuning. Secondly, they contain structural information crucial for the tracking problem. Lastly, these activations have low dimensionality. We perform comprehensive experiments on three benchmark datasets: OTB, ALOV300++ and the recently introduced VOT2015. Surprisingly, different to image classification, our results suggest that activations from the first layer provide superior tracking performance compared to the deeper layers. Our results further show that the convolutional features provide improved results compared to standard hand-crafted features. Finally, results comparable to state-of-the-art trackers are obtained on all three benchmark datasets. <s> BIB001 </s> Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual object tracking. The key to their success is the ability to efficiently exploit available negative data ... <s> BIB002 </s> We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN). Our algorithm pretrains a CNN using a large set of videos with tracking groundtruths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify target in each domain. We train each domain in the network iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers in the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating the candidate windows randomly sampled around the previous target state. The proposed algorithm illustrates outstanding performance in existing tracking benchmarks. <s> BIB003 </s> In this paper, we propose a novel online multi-object tracking (MOT) framework, which exploits features from multiple convolutional layers. In particular, we use the top layer to formulate a category-level classifier and use a lower layer to identify instances from one category under the intuition that lower layers contain much more details. To avoid the computational cost caused by online fine-tuning, we train our appearance model with an offline learning strategy using the historical appearance reserved for each object. We evaluate the proposed tracking framework on a popular MOT benchmark to demonstrate the effectiveness and the state-of-the-art performance of our tracker. <s> BIB004 </s> Discriminative correlation filters (DCFs) have been shown to perform superiorly in visual tracking. They only need a small set of training samples from the initial frame to generate an appearance model. However, existing DCFs learn the filters separately from feature extraction, and update these filters using a moving average operation with an empirical weight. These DCF trackers hardly benefit from the end-to-end training. In this paper, we propose the CREST algorithm to reformulate DCFs as a one-layer convolutional neural network. Our method integrates feature extraction, response map generation as well as model update into the neural networks for an end-to-end training. To reduce model degradation during online update, we apply residual learning to take appearance changes into account. Extensive experiments on the benchmark datasets demonstrate that our CREST tracker performs favorably against state-of-the-art trackers. <s> BIB005 </s> In recent years, Discriminative Correlation Filter (DCF) based methods have significantly advanced the state-of-the-art in tracking. However, in the pursuit of ever increasing tracking performance, their characteristic speed and real-time capability have gradually faded. Further, the increasingly complex models, with massive number of trainable parameters, have introduced the risk of severe over-fitting. In this work, we tackle the key causes behind the problems of computational complexity and over-fitting, with the aim of simultaneously improving both speed and performance. We revisit the core DCF formulation and introduce: (i) a factorized convolution operator, which drastically reduces the number of parameters in the model, (ii) a compact generative model of the training sample distribution, that significantly reduces memory and time complexity, while providing better diversity of samples, (iii) a conservative model update strategy with improved robustness and reduced complexity. We perform comprehensive experiments on four benchmarks: VOT2016, UAV123, OTB-2015, and TempleColor. When using expensive deep features, our tracker provides a 20-fold speedup and achieves a 13.0% relative gain in Expected Average Overlap compared to the top ranked method [12] in the VOT2016 challenge. Moreover, our fast variant, using hand-crafted features, operates at 60 Hz on a single CPU, while obtaining 65.0% AUC on OTB-2015. <s> BIB006 </s> Convolutional neural network (CNN) has drawn increasing interest in visual tracking owing to its powerfulness in feature extraction. Most existing CNN-based trackers treat tracking as a classification problem. However, these trackers are sensitive to similar distractors because their CNN models mainly focus on inter-class classification. To address this problem, we use self-structure information of object to distinguish it from distractors. Specifically, we utilize recurrent neural network (RNN) to model object structure, and incorporate it into CNN to improve its robustness to similar distractors. Considering that convolutional layers in different levels characterize the object from different perspectives, we use multiple RNNs to model object structure in different levels respectively. Extensive experiments on three benchmarks, OTB100, TC-128 and VOT2015, show that the proposed algorithm outperforms other methods. Code is released at www.dabi.temple.edu/hbling/code/SANet/SANet.html. <s> BIB007 </s> The robustness of the visual trackers based on the correlation maps generated from convolutional neural networks can be substantially improved if these maps are employed in conjunction with a particle filter. In this article, we present a particle filter that estimates the target size as well as the target position and that utilizes a new adaptive correlation filter to account for potential errors in the model generation. Thus, instead of generating one model which is highly dependent on the estimated target position and size, we generate a variable number of target models based on high likelihood particles, which increases in challenging situations and decreases in less complex scenarios. Experimental results on the Visual Tracker Benchmark v1.0 demonstrate that our proposed framework significantly outperforms state-of-the-art methods. <s> BIB008 </s> In this paper we present a new approach for efficient regression based object tracking which we refer to as Deep-LK. Our approach is closely related to the Generic Object Tracking Using Regression Networks (GOTURN) framework of Held et al. We make the following contributions. First, we demonstrate that there is a theoretical relationship between siamese regression networks like GOTURN and the classical Inverse-Compositional Lucas & Kanade (IC-LK) algorithm. Further, we demonstrate that unlike GOTURN, IC-LK adapts its regressor to the appearance of the currently tracked frame. We argue that this missing property in GOTURN can be attributed to its poor performance on unseen objects and/or viewpoints. Second, we propose a novel framework for object tracking - which we refer to as Deep-LK - that is inspired by the IC-LK framework. Finally, we show impressive results demonstrating that Deep-LK substantially outperforms GOTURN. Additionally, we demonstrate comparable tracking performance to current state of the art deep-trackers whilst being an order of magnitude (i.e. 100 FPS) computationally efficient. <s> BIB009 </s> We present a system for keyframe-based dense camera tracking and depth map estimation that is entirely learned. For tracking, we estimate small pose increments between the current camera image and a synthetic viewpoint. This significantly simplifies the learning problem and alleviates the dataset bias for camera motions. Further, we show that generating a large number of pose hypotheses leads to more accurate predictions. For mapping, we accumulate information in a cost volume centered at the current depth estimate. The mapping network then combines the cost volume and the keyframe image to update the depth prediction, thereby effectively making use of depth measurements and image-based priors. Our approach yields state-of-the-art results with few images and is robust with respect to noisy camera poses. We demonstrate that the performance of our 6 DOF tracking competes with RGB-D tracking algorithms. We compare favorably against strong classic and deep learning powered dense depth algorithms. <s> BIB010 </s> In recent years, regression trackers have drawn increasing attention in the visual-object tracking community due to their favorable performance and easy implementation. The tracker algorithms directly learn mapping from dense samples around the target object to Gaussian-like soft labels. However, in many real applications, when applied to test data, the extreme imbalanced distribution of training samples usually hinders the robustness and accuracy of regression trackers. In this paper, we propose a novel effective distractor-aware loss function to balance this issue by highlighting the significant domain and by severely penalizing the pure background. In addition, we introduce a fully differentiable hierarchy-normalized concatenation connection to exploit abstractions across multiple convolutional layers. Extensive experiments were conducted on five challenging benchmark-tracking datasets, that is, OTB-13, OTB-15, TC-128, UAV-123, and VOT17. The experimental results are promising and show that the proposed tracker performs much better than nearly all the compared state-of-the-art approaches. <s> BIB011 </s>
Many results from the related literature systematically demonstrate that convolutional features are more useful for tracking than other explicitly-computed ones (Haar, FHOG, color labeling etc.). An example in this sense is BIB004 , which handles MOT using combinations of values from convolutional layers located at multiple levels. The method is based on the notion that lower-level layers account for a larger portion of the input image and therefore contain more details from the identified objects, making them useful, for instance, for handling occlusion. Conversely, top-level layers are more representative of semantics and are useful in distinguishing objects from the background. The proposed CNN architecture uses dual fully-connected components, for higher and lower-level features, which handle category-level and instance-level classification, respectively ( Figure 1 ). The proper identification of objects, particularly where occlusion events occur, involves the generation of appearance models of the tracked objects, which can result from the appropriate processing of the features learned within the CNN. On a similar note, it has been observed that the output of the fully-connected component of a CNN is not suitable for handling infrared images. Attempts to directly transfer CNNs pretrained with traditional images for use with infrared sensor data are unsuccessful, since only the information from the convolutional layers seems to be useful for this purpose. Furthermore, the layer data itself requires some level of adaptation to the specifics of infrared images. Typically, infrared data offers much less spatial information than visual images, and is much better suited, for example, to depth sensors for gathering distances to objects, albeit at a significantly lower resolution compared to regular image acquisition.
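The dual-component idea above — shared convolutional features feeding separate classifier branches — can be sketched with a toy shared backbone and per-branch logistic heads; the weights here are illustrative, not learned:

```python
import math

def shared_features(patch):
    # shared "backbone" stub: two pooled statistics of the image patch
    flat = [v for row in patch for v in row]
    return [sum(flat) / len(flat), max(flat)]

def branch_score(patch, head):
    # a branch is a tiny binary classifier on the shared features, e.g.
    # category-level (object vs. background) or instance-level (which object)
    f = shared_features(patch)
    s = head["b"] + sum(w * x for w, x in zip(head["w"], f))
    return 1.0 / (1.0 + math.exp(-s))
```

Several heads can share one call to `shared_features`, which is exactly the computational saving the shared/branched design buys.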
As such, convolutional layers from infrared images are used in conjunction with correlation filters to generate a set of weak trackers which provide response maps with regard to the targets' locations. The weak trackers are then combined in ensembles which form stronger response maps with much greater tracking accuracy. The response map of an image is, in general terms, an intensity image where higher intensities indicate a change or a desired feature/shape/structure in the initial image, when exposed to an operator or correlation filter of some kind. By matching or fusing responses from multiple images within a video sequence, one could identify similar objects (i.e. the same pedestrian) across the sequence and subsequently construct their trajectories. The potential of correlation filters is also exploitable for regular images. These have the potential to boost the information extracted from the activations of CNN layers, for instance in BIB001 , where the authors find that by applying the appropriate filters to information drawn from shallow CNN layers, a level of robustness similar to using deeper layers or a combination of multiple layers can be achieved. In BIB008 , the authors also note the added robustness obtainable by post-filtering convolutional layers. By using particle and correlation filters, basic geometric and spatial features can be deduced for the tracked objects, which, together with a means of adaptively generating variable models, can be made to handle both simple and complex scenes. An alternative approach can be found in BIB005 , where discriminative correlation filters are used to generate an appearance model from a small number of samples. The overall approach is similar, involving feature extraction, post-processing, and the generation of response maps for carrying out better model updates within the neural network.
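The notion of a response map can be made concrete with a naive (non-FFT) correlation sketch: sliding a target template over a feature map yields a score at every offset, and the peak marks the most likely target location. DCF trackers compute the same quantity far more efficiently in the Fourier domain:

```python
def response_map(image, template):
    # slide the template over the image; each entry of the response map
    # is the correlation score at that offset (higher = better match)
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    resp = []
    for r in range(ih - th + 1):
        row = []
        for c in range(iw - tw + 1):
            row.append(sum(image[r + i][c + j] * template[i][j]
                           for i in range(th) for j in range(tw)))
        resp.append(row)
    return resp

def peak(resp):
    # the brightest point of the response map is the estimated location
    best = max((v, r, c) for r, row in enumerate(resp)
               for c, v in enumerate(row))
    return best[1], best[2]
```

Fusing the maps of several such weak trackers (e.g. by element-wise summation) is one simple way to build the stronger ensemble response described above.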
Contrary to other similar results, the correlation filters used throughout the system are learned within a one-layer CNN, which eventually can be used to make predictions based on the response maps. Furthermore, residual learning is employed in order to avoid model degradation, instead of the much more frequently-used method of stacking multiple layers. Other tracking methods learn a similar kind of mapping from samples in the vicinity of the target object using deep regression BIB009 , BIB011 , or by estimating and learning depth information BIB010 . The authors of BIB002 note that correlation filters have limitations imposed by the feature map resolution and propose a novel solution where features are learned in a continuous domain, using an appropriate interpolation model. This allows for the more effective resolution-independent compositing of multiple feature maps, resulting in superior classification results. Methods based on discriminative correlation filters are notoriously prone to excessive complexity and overfitting, and various means are available for optimizing the more traditional methods. The most noteworthy in this sense is BIB006 , which employs efficient convolution operators, a training sample distribution scheme and an optimal update strategy in an attempt to boost performance and reduce the number of parameters. A promising result which demonstrates significant robustness and accuracy is BIB003 , which uses a CNN where the first set of layers are shared, as in a standard CNN; however, at some point the layers branch into multiple domain-specific ones. This approach has the benefit of splitting the tracking problem into subproblems which are solved separately in their respective layer sets. Each domain has its own training sequences and can be customized to address a specific issue (such as distinguishing a target with specific shape parameters from the background). A similar concept, i.e.
a network with components distinctly trained for a specific problem, can be found in BIB007 . In this case, multiple recurrent layers are used to model different structural properties of the tracked objects, which are incorporated into a parent CNN with the same purpose of improving accuracy and robustness. The RNN layers generate what the authors refer to as "structurally-aware feature maps" which, when combined with pooled versions of their non-structurally aware counterparts, significantly improve the classification results.
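A recurring theme across the trackers in this subsection is fusing maps taken from different depths of the network: coarse but semantically strong deep maps are upsampled and blended with detailed shallow maps. A minimal sketch of that fusion step, using nearest-neighbour upsampling and a fixed blend weight (both simplifying assumptions):

```python
def upsample2x(m):
    # nearest-neighbour 2x upsampling of a coarse top-layer map
    out = []
    for row in m:
        wide = [v for v in row for _ in (0, 1)]
        out += [wide, list(wide)]
    return out

def fuse(shallow, deep, alpha=0.5):
    # weighted fusion of detail (shallow map) and semantics (upsampled
    # deep map); real trackers learn this weighting adaptively
    up = upsample2x(deep)
    return [[alpha * s + (1 - alpha) * d for s, d in zip(sr, dr)]
            for sr, dr in zip(shallow, up)]
```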
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> High-Level Features, Occlusion Handling and Feature Fusion <s> In recent years, several methods have been developed to utilize hierarchical features learned from a deep convolutional neural network (CNN) for visual tracking. However, as features from a certain CNN layer characterize an object of interest from only one aspect or one level, the performance of such trackers trained with features from one layer (usually the second to last layer) can be further improved. In this paper, we propose a novel CNN based tracking framework, which takes full advantage of features from different CNN layers and uses an adaptive Hedge method to hedge several CNN based trackers into a single stronger one. Extensive experiments on a benchmark dataset of 100 challenging image sequences demonstrate the effectiveness of the proposed algorithm compared to several state-of-the-art trackers. <s> BIB001 </s> Simple Online and Realtime Tracking (SORT) is a pragmatic approach to multiple object tracking with a focus on simple, effective algorithms. In this paper, we integrate appearance information to improve the performance of SORT. Due to this extension we are able to track objects through longer periods of occlusions, effectively reducing the number of identity switches. In spirit of the original framework we place much of the computational complexity into an offline pre-training stage where we learn a deep association metric on a large-scale person re-identification dataset. During online application, we establish measurement-to-track associations using nearest neighbor queries in visual appearance space. Experimental evaluation shows that our extensions reduce the number of identity switches by 45%, achieving overall competitive performance at high frame rates. <s> BIB002 </s> In this paper, we propose a CNN-based framework for online MOT. This framework utilizes the merits of single object trackers in adapting appearance models and searching for target in the next frame. Simply applying single object tracker for MOT will encounter the problem in computational efficiency and drifted results caused by occlusion. Our framework achieves computational efficiency by sharing features and using ROI-Pooling to obtain individual features for each target. Some online learned target-specific CNN layers are used for adapting the appearance model for each target. In the framework, we introduce spatial-temporal attention mechanism (STAM) to handle the drift caused by occlusion and interaction among targets. The visibility map of the target is learned and used for inferring the spatial attention map. The spatial attention map is then applied to weight the features. Besides, the occlusion status can be estimated from the visibility map, which controls the online updating process via weighted loss on training samples with different occlusion statuses in different frames. It can be considered as temporal attention mechanism. The proposed algorithm achieves 34.3% and 46.0% in MOTA on challenging MOT15 and MOT16 benchmark dataset respectively. <s> BIB003 </s> Recently deep neural networks have been widely employed to deal with the visual tracking problem. In this work, we present a new deep architecture which incorporates the temporal and spatial information to boost the tracking performance. Our deep architecture contains three networks, a Feature Net, a Temporal Net, and a Spatial Net. The Feature Net extracts general feature representations of the target. With these feature representations, the Temporal Net encodes the trajectory of the target and directly learns temporal correspondences to estimate the object state from a global perspective. Based on the learning results of the Temporal Net, the Spatial Net further refines the object tracking state using local spatial object information. Extensive experiments on four of the largest tracking benchmarks, including VOT2014, VOT2016, OTB50, and OTB100, demonstrate competing performance of the proposed tracker over a number of state-of-the-art algorithms. <s> BIB004 </s> Most of the existing tracking methods based on CNN (convolutional neural networks) are too slow for real-time application despite the excellent tracking precision compared with the traditional ones. In this paper, a fast dynamic visual tracking algorithm combining CNN based MDNet (Multi-Domain Network) and RoIAlign was developed. The major problem of MDNet also lies in the time efficiency. Considering the computational complexity of MDNet is mainly caused by the large amount of convolution operations and fine-tuning of the network during tracking, a RoIPool layer which could conduct the convolution over the whole image instead of each RoI is added to accelerate the convolution and a new strategy of fine-tuning the fully-connected layers is used to accelerate the update. With RoIPool employed, the computation speed has been increased but the tracking precision has dropped simultaneously. RoIPool could lose some positioning precision because it can not handle locations represented by floating numbers. So RoIAlign, instead of RoIPool, which can process floating numbers of locations by bilinear interpolation has been added to the network. The results show the target localization precision has been improved and it hardly increases the computational cost. These strategies can accelerate the processing and make it 7x faster than MDNet with very low impact on precision and it can run at around 7 fps. The proposed algorithm has been evaluated on two benchmarks: OTB100 and VOT2016, on which high precision and speed have been obtained. The influence of the network structure and training data are also discussed with experiments. <s> BIB005 </s> Multi-People Tracking in an open-world setting requires a special effort in precise detection. Moreover, temporal continuity in the detection phase gains more importance when scene cluttering introduces the challenging problems of occluded targets. For the purpose, we propose a deep network architecture that jointly extracts people body parts and associates them across short temporal spans. Our model explicitly deals with occluded body parts, by hallucinating plausible solutions of not visible joints. We propose a new end-to-end architecture composed by four branches (visible heatmaps, occluded heatmaps, part affinity fields and temporal affinity fields) fed by a time linker feature extractor. To overcome the lack of surveillance data with tracking, body part and occlusion annotations we created the vastest Computer Graphics dataset for people tracking in urban scenarios by exploiting a photorealistic videogame. It is up to now the vastest dataset (about 500.000 frames, almost 10 million body poses) of human body parts for people tracking in urban scenarios. Our architecture trained on virtual data exhibits good generalization capabilities also on public real tracking benchmarks, when image resolution and sharpness are high enough, producing reliable tracklets useful for further batch data association or re-id modules. <s> BIB006 </s> In the field of generic object tracking numerous attempts have been made to exploit deep features. Despite all expectations, deep trackers are yet to reach an outstanding level of performance compared to methods solely based on handcrafted features. In this paper, we investigate this key issue and propose an approach to unlock the true potential of deep features for tracking. We systematically study the characteristics of both deep and shallow features, and their relation to tracking accuracy and robustness. We identify the limited data and low spatial resolution as the main challenges, and propose strategies to counter these issues when integrating deep features for tracking. Furthermore, we propose a novel adaptive fusion approach that leverages the complementary properties of deep and shallow features to improve both robustness and accuracy. Extensive experiments are performed on four challenging datasets. On VOT2017, our approach significantly outperforms the top performing tracker from the challenge with a relative gain of 17% in EAO. <s> BIB007 </s> Multi-Target Multi-Camera Tracking (MTMCT) tracks many people through video taken from several cameras. Person Re-Identification (Re-ID) retrieves from a gallery images of people similar to a person query image. We learn good features for both MTMCT and Re-ID with a convolutional neural network. Our contributions include an adaptive weighted triplet loss for training and a new technique for hard-identity mining. Our method outperforms the state of the art both on the DukeMTMC benchmarks for tracking, and on the Market-1501 and DukeMTMC-ReID benchmarks for Re-ID. We examine the correlation between good Re-ID and good MTMCT scores, and perform ablation studies to elucidate the contributions of the main components of our system. Code is available. <s> BIB008 </s> Autonomous driving presents one of the largest problems that the robotics and artificial intelligence communities are facing at the moment, both in terms of difficulty and potential societal impact. Self-driving vehicles (SDVs) are expected to prevent road accidents and save millions of lives while improving the livelihood and life quality of many more. However, despite large interest and a number of industry players working in the autonomous domain, there still remains more to be done in order to develop a system capable of operating at a level comparable to best human drivers. One reason for this is high uncertainty of traffic behavior and large number of situations that an SDV may encounter on the roads, making it very difficult to create a fully generalizable system. To ensure safe and efficient operations, an autonomous vehicle is required to account for this uncertainty and to anticipate a multitude of possible behaviors of traffic actors in its surrounding. We address this critical problem and present a method to predict multiple possible trajectories of actors while also estimating their probabilities. The method encodes each actor's surrounding context into a raster image, used as input by deep convolutional networks to automatically derive relevant features for the task.
Following extensive offline evaluation and comparison to state-of-the-art baselines, the method was successfully tested on SDVs in closed-course tests. <s> BIB009 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> High-Level Features, Occlusion Handling and Feature Fusion <s> Recent progresses in model-free single object tracking (SOT) algorithms have largely inspired applying SOT to multi-object tracking (MOT) to improve the robustness as well as relieving dependency on external detector. However, SOT algorithms are generally designed for distinguishing a target from its environment, and hence meet problems when a target is spatially mixed with similar objects as observed frequently in MOT. To address this issue, in this paper we propose an instance-aware tracker to integrate SOT techniques for MOT by encoding awareness both within and between target models. In particular, we construct each target model by fusing information for distinguishing target both from background and other instances (tracking targets). To conserve uniqueness of all target models, our instance-aware tracker considers response maps from all target models and assigns spatial locations exclusively to optimize the overall accuracy. Another contribution we make is a dynamic model refreshing strategy learned by a convolutional neural network. This strategy helps to eliminate initialization noise as well as to adapt to variation of target size and appearance. To show the effectiveness of the proposed approach, it is evaluated on the popular MOT15 and MOT16 challenge benchmarks. On both benchmarks, our approach achieves the best overall performances in comparison with published results. <s> BIB010
Appearance models offer high-level features which are also used to account for occlusion in much simpler and more efficient systems, such as in BIB002 , where computed appearance descriptors form an appearance space. With properly-determined metrics, observations having a similar appearance are identified using a nearest-neighbor-based approach. Switching from image space to an appearance space seems to substantially account for occlusions, reducing their negative impact at a negligible cost in terms of performance. A possible alternative to appearance-based classification is the use of template-based metrics. Such an approach uses a reference region of interest (ROI) drawn from one or multiple frames and attempts to match it in subsequent frames using an appropriately-constructed metric. Template-based methods often work for partial detections, thereby accounting for occlusion and/or noise, considering that the template need not be perfectly or completely matched for a successful detection to occur. An example of a template-based method is provided by , which involves three CNNs, one for template generation, one dedicated to region searching and one for handling background areas. The method is somewhat similar to what could be achieved by a generative adversarial network (GAN), since the "searcher" network attempts to fit multiple subimages within the positive detections provided by the template component while simultaneously attempting to maximize the distance to the negative background component. The candidate subimages generated by the three components are fed through a loss function which is designed to favor candidates which are closer to template regions than to background ones. While, performance-wise, such an approach is claimed to provide impressive framerates, care should be taken when using template- or reference-based methods.
Figure 2: A CNN-based model that uses ROI-pooling and shared features for target classification BIB003
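The appearance-space matching idea above can be reduced to a nearest-neighbor lookup over descriptor distances. The following is an illustrative sketch only: the descriptors, the cosine metric and the rejection threshold are assumptions for the example, not details taken from the cited work.

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity between two appearance descriptors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def match_by_appearance(track_descriptors, detection, max_dist=0.3):
    """Assign a new detection to the nearest track in appearance space.

    track_descriptors: {track_id: descriptor}; returns the matched id,
    or None when even the nearest neighbor is too far (e.g. a new object
    or one whose appearance changed beyond the threshold).
    """
    best_id, best_d = None, float("inf")
    for tid, desc in track_descriptors.items():
        d = cosine_distance(desc, detection)
        if d < best_d:
            best_id, best_d = tid, d
    return best_id if best_d <= max_dist else None

tracks = {1: [1.0, 0.0, 0.0], 2: [0.0, 1.0, 0.0]}
print(match_by_appearance(tracks, [0.9, 0.1, 0.0]))  # nearest to track 1
```

Because the match is made in descriptor space rather than image space, a partially occluded object whose visible parts still produce a similar descriptor can keep its track identity.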
These are generally suited for situations where there is no significant variation in the overall tone of the frames. Such methods have a much higher failure rate when, for instance, the lighting conditions change during tracking, such as when the tracked object moves from a brightly-lit to a shaded area. An improvement on the use of appearance and shared tracking information is provided by BIB003 in the form of a CNN-based single-object tracker which generates and adapts the appearance models for multi-frame detection (Figure 2). The use of pooling layers and shared features accounts for drift effects caused by occlusion and inter-object dependency, as part of a spatial and temporal attention mechanism which is responsible for dynamically discriminating between training candidates based on the level of occlusion. As such, training samples are weighted based on their occlusion status, which optimizes the training process both in terms of classification accuracy and performance. Generally speaking, pooling operations have two important effects: on the one hand, the image area represented by each element of the feature map being analyzed is increased, since a pooled feature map contains information from a larger area of the originating image; on the other hand, the reduced size of a pooled map means fewer computational resources are required to process it, which positively impacts performance. The major downside of pooling is that spatial positioning is further diluted with each additional layer. Multiple related papers involve so-called "ROI pooling", which commonly refers to a pooling operation being applied to the bounding box of an identified object, in the hope that the reduced representation will gain robustness to noise and variations of the object's geometry across multiple frames. ROI pooling is successfully used by BIB005 to improve the performance of their CNN-based classifier.
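The trade-off can be made concrete with a toy version of ROI pooling: a rectangular region of a single-channel feature map is max-pooled into a fixed 2x2 grid, and the integer quantization of the bin edges is exactly where sub-pixel positioning is discarded. This is an illustrative sketch under simplified assumptions, not the implementation used in the cited works.

```python
def roi_pool(feature_map, roi, out_size=2):
    """Max-pool the feature-map region given by roi = (x0, y0, x1, y1)
    into an out_size x out_size grid. Bin edges are quantized with
    integer division -- the step that discards sub-pixel positioning."""
    x0, y0, x1, y1 = roi
    h, w = y1 - y0, x1 - x0
    pooled = []
    for by in range(out_size):
        row = []
        for bx in range(out_size):
            ys = y0 + by * h // out_size
            ye = y0 + (by + 1) * h // out_size
            xs = x0 + bx * w // out_size
            xe = x0 + (bx + 1) * w // out_size
            row.append(max(feature_map[y][x]
                           for y in range(ys, max(ye, ys + 1))
                           for x in range(xs, max(xe, xs + 1))))
        pooled.append(row)
    return pooled

fmap = [[0, 1, 2, 3],
        [4, 5, 6, 7],
        [8, 9, 10, 11],
        [12, 13, 14, 15]]
print(roi_pool(fmap, (0, 0, 4, 4)))  # [[5, 7], [13, 15]]
```

The 4x4 region collapses to 2x2: cheaper to process and more tolerant of small shape changes, but each output value can no longer say where inside its bin the maximum occurred.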
The authors observe that positioning cues are adversely affected by pooling, to which a potential solution is to reposition the misaligned ROIs via bilinear interpolation. This reinterpretation of pooling is referred to as "ROI align". The gain in performance is significant, while the authors demonstrate that the positioning of the ROIs is stabilized. Tracking stabilization is fundamental in automotive applications, where effects such as jittering, camera shaking and spatial/temporal noise commonly occur. In terms of ensuring ROI stability and accuracy, occlusion plays an important role. Some authors handle this topic extensively, such as BIB006 , which proposes a deep neural network for tracking occluded body parts by processing features extracted from a VGG19 network. Some authors use different interpretations of the feature concept, adapted to the specifics of autonomous driving. BIB009 create custom feature maps by encoding various properties of the detections (bounding boxes, positions, velocities, accelerations etc.) in raster images. These images are sent through a CNN which generates raster features that the authors demonstrate to provide more reliable correlations and more accurate trajectories than features derived directly from raw data. The idea of tracking robustness and stability is sometimes solvable using image and object fusion. The related methods are referred to as "instance-aware", meaning that a targeted object is matched across the image space and across multiple frames by fusing identified objects with similar characteristics. BIB010 proposes a fusion-based method that uses single-object tracking to identify multiple candidate instances and subsequently builds target models for potential objects by fusing information from detections and background cues. The models are updated using a CNN, which ensures robustness to noise, scaling and minor variations of the targets' appearance.
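The core operation behind ROI align is bilinear sampling: instead of rounding a floating-point coordinate to the nearest cell, the value is interpolated from the four surrounding cells. A minimal sketch for a single-channel feature map (illustrative only; real implementations sample several points per output bin and average them):

```python
def bilinear_sample(fmap, x, y):
    """Sample fmap at the floating-point location (x, y) by bilinear
    interpolation -- the operation RoIAlign uses in place of the
    coordinate rounding performed by RoIPool."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(fmap[0]) - 1)
    y1 = min(y0 + 1, len(fmap) - 1)
    fx, fy = x - x0, y - y0
    top = fmap[y0][x0] * (1 - fx) + fmap[y0][x1] * fx
    bot = fmap[y1][x0] * (1 - fx) + fmap[y1][x1] * fx
    return top * (1 - fy) + bot * fy

fmap = [[0.0, 1.0],
        [2.0, 3.0]]
print(bilinear_sample(fmap, 0.5, 0.5))  # 1.5
```

Because the sampled value varies smoothly with (x, y), sub-pixel shifts of the ROI produce correspondingly small changes in the pooled features, which is what stabilizes localization.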
As with many other related approaches, an online implementation offloads most of the processing to an external server, leaving the vehicle's embedded device to carry out only minor, frequently-needed tasks. Since quick reactions of the system are crucial for proper and safe vehicle operation, performance and a rapid response of the underlying software are essential, which is why the online approach is popular in this field. Also in the context of ensuring robustness and stability, some authors apply fusion techniques to information extracted from CNN layers. It has been previously mentioned that important correlations can be drawn from deep and shallow layers, which can be exploited together for identifying robust features in the data. This principle is used, for instance, in BIB007 , where, in order to ensure robustness and performance, various features extracted from layers in different parts of a CNN are fused to form stronger characteristics which are affected to a lesser degree by noise, spatial variation and perturbations in the acquired images. The identified relationships between CNN layers are exploited in order to account for the spatial information lost in deeper layers. The method is claimed to have improved accuracy over the state of the art of the time, which is consistent with the idea of ensuring robustness and low failure rates. Deeper features are more consistent and allow for stronger classification, while shallow features compensate for the detrimental effects of filtering and pooling, where relative positioning information may be lost. This allows deep features to be better integrated into the spatial context of the images. On a similar note, in BIB001 , features from multiple layers, which individually constitute weak trackers, are combined into a stronger one by means of a hedging algorithm.
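The hedging idea can be sketched with a standard multiplicative-weights update: each weak tracker keeps a weight, weights shrink exponentially with the per-frame loss, and the combined estimate is a weighted average. This is a generic sketch of the hedge scheme, with made-up losses; it is not the exact update rule of the cited tracker.

```python
import math

def hedge_update(weights, losses, eta=1.0):
    """Multiplicative-weights ("hedge") update: a tracker that incurred
    a high loss this frame has its weight shrunk exponentially; weights
    are renormalized to sum to one."""
    raised = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(raised)
    return [w / total for w in raised]

def combine(predictions, weights):
    """Weighted average of the weak trackers' position estimates."""
    return sum(p * w for p, w in zip(predictions, weights))

weights = [1 / 3] * 3          # three weak trackers, equal initial trust
losses = [0.1, 0.9, 0.5]       # per-frame disagreement of each tracker
weights = hedge_update(weights, losses)
# the low-loss tracker now dominates: weights[0] > weights[2] > weights[1]
```

Each weak layer-level tracker contributes a little signal and a lot of noise; the exponential reweighting gradually concentrates trust on whichever layers are reliable in the current conditions.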
The practice of combining multiple weak methods into a more effective one has significant potential and is based on the principle that each individual weak component contains some piece of meaningful information on the tracked object, along with useless data, mostly in the form of noise. By appropriately combining the contributions of each weak component, a stronger one can be generated. As such, methods that exploit compound classifiers typically show robustness to variations of illumination, affine transforms, camera shaking etc. The downside of such methods comes from the need to compute multiple groups of weak features, which penalizes real-time response, while the fusion algorithm adds further overhead in terms of performance. Alternative approaches exist which mitigate this to some extent, such as the use of multiple sensors which directly provide data, as opposed to relying on multiple features computed from the same camera or pair of cameras. An example in this direction is provided in BIB008 , where an image gallery from a multi-camera system is fed into a CNN in an attempt to solve multi-target multi-camera tracking and target re-identification problems. For correct and consistent re-identification, an observation in a specific image is matched against several from other cameras using correlations as part of a similarity metric. Such correlations among images from multiple cameras are learned during training and subsequently clustered to provide a unified agreement between them. Eventually, after a training process that exploits a custom triplet loss function, features are obtained which are further used in the identification process. In terms of performance, the method boasts substantial accuracy considering the multi-camera setup.
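The standard form of the triplet loss mentioned above is easy to state: for an anchor, a same-identity positive and a different-identity negative, the loss is the hinge of the distance gap plus a margin. The sketch below shows the generic formulation with squared Euclidean distance and toy descriptors; the cited work uses its own adaptively weighted variant.

```python
def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: pushes the anchor-positive distance
    below the anchor-negative distance by at least `margin`; zero loss
    once that constraint is satisfied."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

a = [0.0, 0.0]
p = [0.1, 0.0]   # same identity, nearby in feature space
n = [1.0, 1.0]   # different identity, far away
print(triplet_loss(a, p, n))  # 0.0: constraint already satisfied
```

Training on many such triplets shapes the feature space so that images of the same person from different cameras cluster together, which is what makes cross-camera re-identification possible.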
The idea of compositing robust features from a multi-faceted architecture is further exploited in works such as BIB004 , where a triple-net setup is used to generate features that account for appearance, spatial cues and temporal consistency.
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> In this paper, we study a discriminatively trained deep convolutional network for the task of visual tracking. Our tracker utilizes both motion and appearance features that are extracted from a pre-trained dual stream deep convolution network. We show that the features extracted from our dual-stream network can provide rich information about the target and this leads to competitive performance against state of the art tracking methods on a visual tracking benchmark. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> Recent approaches for high accuracy detection and tracking of object categories in video consist of complex multistage solutions that become more cumbersome each year. In this paper we propose a ConvNet architecture that jointly performs detection and tracking, solving the task in a simple and effective way. Our contributions are threefold: (i) we set up a ConvNet architecture for simultaneous detection and tracking, using a multi-task objective for frame-based object detection and across-frame track regression; (ii) we introduce correlation features that represent object co-occurrences across time to aid the ConvNet during tracking; and (iii) we link the frame level detections based on our across-frame tracklets to produce high accuracy detections at the video level. Our ConvNet architecture for spatiotemporal object detection is evaluated on the large-scale ImageNet VID dataset where it achieves state-of-the-art results. Our approach provides better single model performance than the winning method of the last ImageNet challenge while being conceptually much simpler. Finally, we show that by increasing the temporal stride we can dramatically increase the tracker speed. 
<s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> Convolutional Neural Network (CNN) based methods have shown significant performance gains in the problem of visual tracking in recent years. Due to many uncertain changes of objects online, such as abrupt motion, background clutter and large deformation, the visual tracking is still a challenging task. We propose a novel algorithm, namely Deep Location-Specific Tracking, which decomposes the tracking problem into a localization task and a classification task, and trains an individual network for each task. The localization network exploits the information in the current frame and provides a specific location to improve the probability of successful tracking, while the classification network finds the target among many examples generated around the target location in the previous frame, as well as the one estimated from the localization network in the current frame. CNN based trackers often have massive number of trainable parameters, and are prone to over-fitting to some particular object states, leading to less precision or tracking drift. We address this problem by learning a classification network based on 1 × 1 convolution and global average pooling. Extensive experimental results on popular benchmark datasets show that the proposed tracker achieves competitive results without using additional tracking videos for fine-tuning. The code is available at https://github.com/ZjjConan/DLST <s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> We propose Quadruplet Convolutional Neural Networks (Quad-CNN) for multi-object tracking, which learn to associate object detections across frames using quadruplet losses. The proposed networks consider target appearances together with their temporal adjacencies for data association. 
Unlike conventional ranking losses, the quadruplet loss enforces an additional constraint that makes temporally adjacent detections more closely located than the ones with large temporal gaps. We also employ a multi-task loss to jointly learn object association and bounding box regression for better localization. The whole network is trained end-to-end. For tracking, the target association is performed by minimax label propagation using the metric learned from the proposed network. We evaluate performance of our multi-object tracking algorithm on public MOT Challenge datasets, and achieve outstanding results. <s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> Data association problems are an important component of many computer vision applications, with multi-object tracking being one of the most prominent examples. A typical approach to data association involves finding a graph matching or network flow that minimizes a sum of pairwise association costs, which are often either hand-crafted or learned as linear functions of fixed features. In this work, we demonstrate that it is possible to learn features for network-flow-based data association via backpropagation, by expressing the optimum of a smoothed network flow problem as a differentiable function of the pairwise association costs. We apply this approach to multi-object tracking with a network flow formulation. Our experiments demonstrate that we are able to successfully learn all cost functions for the association problem in an end-to-end fashion, which outperform hand-crafted costs in all settings. The integration and combination of various sources of inputs becomes easy and the cost functions can be learned entirely from data, alleviating tedious hand-designing of costs. 
<s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> In this paper, we propose a CNN-based framework for online MOT. This framework utilizes the merits of single object trackers in adapting appearance models and searching for target in the next frame. Simply applying single object tracker for MOT will encounter the problem in computational efficiency and drifted results caused by occlusion. Our framework achieves computational efficiency by sharing features and using ROI-Pooling to obtain individual features for each target. Some online learned target-specific CNN layers are used for adapting the appearance model for each target. In the framework, we introduce spatial-temporal attention mechanism (STAM) to handle the drift caused by occlusion and interaction among targets. The visibility map of the target is learned and used for inferring the spatial attention map. The spatial attention map is then applied to weight the features. Besides, the occlusion status can be estimated from the visibility map, which controls the online updating process via weighted loss on training samples with different occlusion statuses in different frames. It can be considered as temporal attention mechanism. The proposed algorithm achieves 34.3% and 46.0% in MOTA on challenging MOT15 and MOT16 benchmark dataset respectively. <s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> In this paper, we propose the methods to handle temporal errors during multi-object tracking. Temporal error occurs when objects are occluded or noisy detections appear near the object. In those situations, tracking may fail and various errors like drift or ID-switching occur. It is hard to overcome temporal errors only by using motion and shape information. 
So, we propose the historical appearance matching method and joint-input siamese network which was trained by 2-step process. It can prevent tracking failures although objects are temporally occluded or last matching information is unreliable. We also provide useful technique to remove noisy detections effectively according to scene condition. Tracking performance, especially identity consistency, is highly improved by attaching our methods. <s> BIB007 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> Multiple Object Tracking (MOT) plays an important role in solving many fundamental problems in video analysis and computer vision. Most MOT methods employ two steps: Object Detection and Data Association. The first step detects objects of interest in every frame of a video, and the second establishes correspondence between the detected objects in different frames to obtain their tracks. Object detection has made tremendous progress in the last few years due to deep learning. However, data association for tracking still relies on hand crafted constraints such as appearance, motion, spatial proximity, grouping etc. to compute affinities between the objects in different frames. In this paper, we harness the power of deep learning for data association in tracking by jointly modeling object appearances and their affinities between different frames in an end-to-end fashion. The proposed Deep Affinity Network (DAN) learns compact, yet comprehensive features of pre-detected objects at several levels of abstraction, and performs exhaustive pairing permutations of those features in any two frames to infer object affinities. DAN also accounts for multiple objects appearing and disappearing between video frames. We exploit the resulting efficient affinity computations to associate objects in the current frame deep into the previous frames for reliable on-line tracking. 
Our technique is evaluated on popular multiple object tracking challenges MOT15, MOT17 and UA-DETRAC. Comprehensive benchmarking under twelve evaluation metrics demonstrates that our approach is among the best performing techniques on the leader board for these challenges. The open source implementation of our work is available at https://github.com/shijieS/SST.git. <s> BIB008 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> This paper proposes a novel model, named Continuity-Discrimination Convolutional Neural Network (CD-CNN), for visual object tracking. Existing state-of-the-art tracking methods do not deal with temporal relationship in video sequences, which leads to imperfect feature representations. To address this problem, CD-CNN models temporal appearance continuity based on the idea of temporal slowness. Mathematically, we prove that, by introducing temporal appearance continuity into tracking, the upper bound of target appearance representation error can be sufficiently small with high probability. Further, in order to alleviate inaccurate target localization and drifting, we propose a novel notion, object-centroid, to characterize not only objectness but also the relative position of the target within a given patch. Both temporal appearance continuity and object-centroid are jointly learned during offline training and then transferred for online tracking. We evaluate our tracker through extensive experiments on two challenging benchmarks and show its competitive tracking performance compared with state-of-the-art trackers. <s> BIB009 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> Visual attention, derived from cognitive neuroscience, facilitates human perception on the most pertinent subset of the sensory data. 
Recently, significant efforts have been made to exploit attention schemes to advance computer vision systems. For visual tracking, it is often challenging to track target objects undergoing large appearance changes. Attention maps facilitate visual tracking by selectively paying attention to temporal robust features. Existing tracking-by-detection approaches mainly use additional attention modules to generate feature weights as the classifiers are not equipped with such mechanisms. In this paper, we propose a reciprocative learning algorithm to exploit visual attention for training deep classifiers. The proposed algorithm consists of feed-forward and backward operations to generate attention maps, which serve as regularization terms coupled with the original classification loss function for training. The deep classifier learns to attend to the regions of target objects robust to appearance changes. Extensive experiments on large-scale benchmark datasets show that the proposed attentive tracking method performs favorably against the state-of-the-art approaches. <s> BIB010 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> In this paper, we propose a unified Multi-Object Tracking (MOT) framework learning to make full use of long term and short term cues for handling complex cases in MOT scenes. Besides, for better association, we propose switcher-aware classification (SAC), which takes the potential identity-switch causer (switcher) into consideration. Specifically, the proposed framework includes a Single Object Tracking (SOT) sub-net to capture short term cues, a re-identification (ReID) sub-net to extract long term cues and a switcher-aware classifier to make matching decisions using extracted features from the main target and the switcher. 
Short term cues help to find false negatives, while long term cues avoid critical mistakes when occlusion happens, and the SAC learns to combine multiple cues in an effective way and improves robustness. The method is evaluated on the challenging MOT benchmarks and achieves the state-of-the-art results. <s> BIB011 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Ensuring Temporal Coherence <s> In this paper, we propose an online Multi-Object Tracking (MOT) approach which integrates the merits of single object tracking and data association methods in a unified framework to handle noisy detections and frequent interactions between targets. Specifically, for applying single object tracking in MOT, we introduce a cost-sensitive tracking loss based on the state-of-the-art visual tracker, which encourages the model to focus on hard negative distractors during online learning. For data association, we propose Dual Matching Attention Networks (DMAN) with both spatial and temporal attention mechanisms. The spatial attention module generates dual attention maps which enable the network to focus on the matching patterns of the input image pair, while the temporal attention module adaptively allocates different levels of attention to different samples in the tracklet to suppress noisy observations. Experimental results on the MOT benchmark datasets show that the proposed algorithm performs favorably against both online and offline trackers in terms of identity-preserving metrics. <s> BIB012
One of the most significant challenges for autonomous driving is accounting for temporal coherence in tracking. Since most, if not all, automotive scenarios involve video and motion across multiple frames, handling image sequence data and accounting for temporal consistency are key factors in ensuring successful predictions, accuracy and the reliability of the systems involved. Essentially, solving temporal tracking is a compound problem which involves, on the one hand, tracking objects in single images, considering all the problems induced by noise, geometry and the lack of spatial information, and, on the other hand, making sure that the tracking is consistent across multiple frames, that is, assigning correct IDs to the same objects in a continuous video sequence. This presents many challenges, for instance when objects become occluded in some frames and are exposed in others. In other cases, the tracked objects suffer affine transformations across frames, of which rotation and shearing are notoriously difficult to handle. Additionally, the objects may change shape due to noise, aliasing and other acquisition-related artifacts that may be present in the images, since video is rarely, if ever, acquired at "high enough" resolution and is in many cases stored in a lossy compressed format. As such, the challenge is to identify features that are robust enough to allow proper classification and to ensure temporal consistency, considering all the pitfalls associated with processing video data. This often involves a "focus and context" approach, where key targets are identified in images not only by the features that they exhibit in that particular image, but also by ensuring that the feature extraction method accounts for the information provided by the context which the tracked object finds itself in. In other words, processing a key frame in a video sequence, which provides the focus, should account for the context information that has been drawn up from previous frames.
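The ID-assignment half of the problem can be illustrated with a deliberately simple greedy scheme: match each detection to the previous-frame track it overlaps most, and mint a new ID otherwise. This is a toy sketch; practical systems combine such overlap cues with the appearance, motion and recurrent models discussed throughout this section.

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def assign_ids(tracks, detections, next_id, min_iou=0.3):
    """Greedy frame-to-frame ID assignment: each detection keeps the ID
    of the best-overlapping previous-frame track, or receives a fresh ID
    when no remaining track overlaps enough (a new or re-appearing object)."""
    assigned, free = {}, dict(tracks)
    for det in detections:
        best = max(free, key=lambda t: iou(free[t], det), default=None)
        if best is not None and iou(free[best], det) >= min_iou:
            assigned[best] = det
            del free[best]
        else:
            next_id += 1
            assigned[next_id] = det
    return assigned, next_id

prev = {7: (0, 0, 10, 10)}                   # track 7 from the previous frame
dets = [(1, 1, 11, 11), (50, 50, 60, 60)]    # current-frame detections
ids, next_id = assign_ids(prev, dets, next_id=7)
print(ids)  # first detection keeps ID 7; the second gets new ID 8
```

The failure modes of this baseline are precisely the ones the text enumerates: occlusion breaks the overlap chain and causes ID switches, which is why appearance and temporal-context cues are layered on top.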
Where supervised algorithms are concerned, one popular approach is to integrate recurrent components into the classifier, which inherently account for the context provided by a set of elements from a sequence. Recurrent neural networks (RNN) and, more specifically, long short-term memory (LSTM) layers are frequently present in the related literature where temporal data is concerned. When training and exploiting RNN layers to classify sequences, the results from one frame carry over to the computations that take place for subsequent frames. As such, when processing the current frame, resulting detections also account for what was found in previous frames. For automotive applications, one advantage of neural networks is that they can be trained off-site, while the resulting model can be ported to the embedded device in the vehicle where predictions and tracking can occur at usable speeds. While training a recurrent network or multiple collaborating networks can take a long time, forward-propagating new data can happen quite fast, making these algorithms a realistic choice for real-time tracking. LSTMs are, however, neither a "magic" solution nor the de facto method for handling sequence data, since many authors have successfully achieved high accuracy results using only CNNs. Additionally, many authors have found it helpful to use dual neural networks in conjunction, where one network processes spatial information while the other handles temporal consistency and motion. Other methods employ siamese networks, i.e. identical classifiers trained differently which identify different features using similar processing. One example of a dual-streaming network is an architecture where appearance and motion are handled by a combination of CNNs which work together within a unified framework. The motion component uses spotlight filtering over feature maps which result from subtracting features drawn from dual CNNs and generates a space-invariant feature map using pooling and fusion operations.
The other component handles appearance by filtering and fusing features from a different arrangement of convolutional layers. Data from ROIs in the acquired images is passed on to both components and motion responses from one component are correlated with appearance responses from the other. Both components produce feature maps which are composed together to form space- and motion-invariant characteristics to be further used for target identification. Another concept which consistently appears in the related literature is "historical matching", where attempts are made to carry over part of the characteristics of tracked objects across multiple frames, by building an affinity model from shape, appearance, positional and motion cues. This is achieved in BIB007 using dual CNNs with multistep training, which handle appearance matching using various filtering operations and linearly composing the resulting features across multiple timestamps. The notion of determining and preserving affinity is also exploited in BIB008 where data consisting of frame pairs several timestamps apart are fed into dual VGG networks. The resulting features are permuted and incorporated into association matrices which are further used to compute object affinities.
Figure 3: A dual CNN detector that extracts and correlates features from frame pairs BIB002
This approach has the benefit of partially accounting for occlusion using only a limited number of frames, since the affinity of an object which is partially occluded in one frame may be preserved if it appears fully in the pair frame. Ensuring the continuity of high-level features such as appearance models is not a trivial task, and multiple solutions exist. For example, BIB009 uses a CNN modified with a discriminative component intended to correct for temporal errors that may accumulate in the appearance of tracked objects across multiple frames.
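The affinity-matrix idea above can be made concrete with a minimal sketch: appearance embeddings extracted from a pair of frames fill an association matrix, which is then resolved by matching. The cosine-similarity scoring, the threshold value and the greedy assignment (rather than, e.g., the Hungarian algorithm) are simplifying assumptions for illustration, not the exact formulation used in the cited trackers.

```python
import math

def cosine(a, b):
    """Cosine similarity between two appearance embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def association_matrix(feats_t, feats_t1):
    """Affinity of every detection at time t against every detection at t+1."""
    return [[cosine(a, b) for b in feats_t1] for a in feats_t]

def greedy_match(matrix, threshold=0.5):
    """Pair rows (tracks) with columns (detections), highest affinity first."""
    pairs, taken_cols, taken_rows = [], set(), set()
    cands = sorted(
        ((matrix[i][j], i, j)
         for i in range(len(matrix))
         for j in range(len(matrix[i]))),
        reverse=True,
    )
    for score, i, j in cands:
        if score < threshold:
            break  # remaining candidates are too dissimilar to match
        if i in taken_rows or j in taken_cols:
            continue
        pairs.append((i, j, score))
        taken_rows.add(i)
        taken_cols.add(j)
    return pairs
```

In practice the embeddings would come from the dual CNN streams, and a bipartite assignment solver would typically replace the greedy loop.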
Discriminative network behavior is also exploited in BIB001 where selectively trained dual networks are used to generate and correlate appearance with a motion stream. Also, decomposing the tracking problem into localization and motion using multiple component networks is a frequently-encountered solution, further exploited in works such as BIB003 , BIB002 . As such, using two networks that work in tandem is a popular approach and seems to provide accurate results throughout the available literature ( Figure 3 ). Some authors take this concept further by employing several such networks BIB004 , each of which contributes features exhibiting specific and limited correlations, which, when joined together, form a complete appearance model of the tracked objects. Other approaches map network components to flow graphs, the traversal of which enables optimal cost-function and feature learning BIB005 . It is worth noting that the more complicated the architecture of the classifier, the more elaborate the training process and the poorer the performance. A careful balance should therefore be reached between the complexity of the classifier, the completeness of the resulting features and the amount of processing and training data needed to produce high-accuracy results at a cost in computational resources which is consistent with the needs of automotive applications. In BIB011 , the idea of object matching from frame pairs is further explored using a three-component setup: a siamese network configuration handles single object tracking and generates short-term cues in the form of tracklet images, while a modified version of GoogLeNet generates re-identification features from multiple tracklets. The third component is based on the idea that there may be a large overlap in the previously-computed features, which are consequently treated as switcher candidates.
As a result, a switcher-aware logic handles the situation where IDs of different objects may be interchanged during frame sequences, mainly as a result of partial occlusion. It is worth mentioning that the tendency in ensuring accurate tracking is to come up with inventive features which express increasingly-abstract concepts. It has been demonstrated throughout the related literature that, in general, the more abstract the feature, the more reliable it is long term. Therefore, a lot of effort is directed toward identifying object features that are not necessarily direct indicators of shape, position and/or geometry, but are rather higher-level, more abstract representations of how the object fits within the overall context of the acquired video sequence. One example of such a concept is the previously-mentioned "affinity"; another is "attention", for which some authors propose neural-network-based solutions for estimating attention and generating attention maps. BIB006 computes attention features which are spatially and temporally sound using an arrangement of ROI identification and pooling operations. BIB012 uses attention cues to handle the inherent noise from conventional detection methods, as well as to compensate for frequent interactions and overlaps among tracked targets. A two-component system handles noise and occlusion and produces spatial attention maps by matching similar regions from pair frames, while temporal coherence is achieved by weighing observations across the trajectory differently, thereby assigning them different levels of attention, which generates filtering criteria used to successfully account for similar observations while eliminating dissimilar ones.
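The temporal-attention weighting described above can be sketched as follows: each observation in a tracklet is scored against a reference appearance, the scores are normalized with a softmax, and the resulting weights pool the observations into a single filtered feature, so that occluded or noisy frames contribute little. The dot-product scoring and the temperature parameter are illustrative assumptions rather than the cited architecture.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def temporal_attention(observations, reference, temperature=1.0):
    """Weight each per-frame feature by its similarity to the reference,
    then pool the tracklet into one attention-filtered feature vector."""
    scores = [sum(o_i * r_i for o_i, r_i in zip(obs, reference)) / temperature
              for obs in observations]
    weights = softmax(scores)
    dim = len(reference)
    pooled = [sum(w * obs[d] for w, obs in zip(weights, observations))
              for d in range(dim)]
    return weights, pooled
```

An observation that resembles the reference appearance receives a high weight; a dissimilar one (e.g. a partially occluded frame) is suppressed in the pooled result.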
Another noteworthy contribution is BIB010 , where attention maps are generated using reciprocative learning, in which the input frame is sent back-and-forth through several convolutional layers: in the forward propagation phase classification scores are generated, while the back-propagation produces attention maps from the gradients of the previously-obtained scores. The computed maps are further used as regularization terms within a classifier. The advantage of this approach is its simplicity compared to other similar ones. The authors claim that their method for generating attention features ensures long-term robustness, which is advantageous considering that other methods that use frame pairs and no recurrent components do not seem to work as well for very long-term sequences.
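A toy numeric analogue of the gradient-derived attention maps discussed above: perturb each input cell, measure the change in a classification score, and take the gradient magnitude as the attention value for that cell. The tiny linear scorer and the finite-difference approximation stand in for the CNN and its back-propagation pass, and are purely illustrative assumptions.

```python
def score(patch, weights):
    """A stand-in classifier score: weighted sum of pixel intensities."""
    return sum(w * p for w, p in zip(weights, patch))

def attention_map(patch, weights, eps=1e-4):
    """Finite-difference gradient magnitude of the score w.r.t. each pixel;
    pixels whose perturbation changes the score most get the most attention."""
    base = score(patch, weights)
    grads = []
    for i in range(len(patch)):
        bumped = list(patch)
        bumped[i] += eps
        grads.append(abs((score(bumped, weights) - base) / eps))
    return grads
```

For this linear scorer the map simply recovers the weight magnitudes, which makes the mechanism easy to verify; in the cited work the gradients flow through convolutional layers instead.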
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> LSTM-Based Methods <s> This paper presents to the best of our knowledge the first end-to-end object tracking approach which directly maps from raw sensor input to object tracks in sensor space without requiring any feature engineering or system identification in the form of plant or sensor models. Specifically, our system accepts a stream of raw sensor data at one end and, in real-time, produces an estimate of the entire environment state at the output including even occluded objects. We achieve this by framing the problem as a deep learning task and exploit sequence models in the form of recurrent neural networks to learn a mapping from sensor measurements to object tracks. In particular, we propose a learning method based on a form of input dropout which allows learning in an unsupervised manner, only based on raw, occluded sensor data without access to ground-truth annotations. We demonstrate our approach using a synthetic dataset designed to mimic the task of tracking objects in 2D laser data -- as commonly encountered in robotics applications -- and show that it learns to track many dynamic objects despite occlusions and the presence of sensor noise. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> LSTM-Based Methods <s> The majority of existing solutions to the Multi-Target Tracking (MTT) problem do not combine cues over a long period of time in a coherent fashion. In this paper, we present an online method that encodes long-term temporal dependencies across multiple cues. One key challenge of tracking methods is to accurately track occluded targets or those which share similar appearance properties with surrounding objects. To address this challenge, we present a structure of Recurrent Neural Networks (RNN) that jointly reasons on multiple cues over a temporal window. 
Our method allows to correct data association errors and recover observations from occluded states. We demonstrate the robustness of our data-driven approach by tracking multiple targets using their appearance, motion, and even interactions. Our method outperforms previous works on multiple publicly available datasets including the challenging MOT benchmark. <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> LSTM-Based Methods <s> In this paper, we propose an efficient vehicle trajectory prediction framework based on recurrent neural network. Basically, the characteristic of the vehicle's trajectory is different from that of regular moving objects since it is affected by various latent factors including road structure, traffic rules, and driver's intention. Previous state of the art approaches use sophisticated vehicle behavior model describing these factors and derive the complex trajectory prediction algorithm, which requires a system designer to conduct intensive model optimization for practical use. Our approach is data-driven and simple to use in that it learns complex behavior of the vehicles from the massive amount of trajectory data through deep neural network model. The proposed trajectory prediction method employs the recurrent neural network called long short-term memory (LSTM) to analyze the temporal behavior and predict the future coordinate of the surrounding vehicles. The proposed scheme feeds the sequence of vehicles' coordinates obtained from sensor measurements to the LSTM and produces the probabilistic information on the future location of the vehicles over occupancy grid map. The experiments conducted using the data collected from highway driving show that the proposed method can produce reasonably good estimate of future trajectory. 
<s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> LSTM-Based Methods <s> To safely and efficiently navigate through complex traffic scenarios, autonomous vehicles need to have the ability to predict the future motion of surrounding vehicles. Multiple interacting agents, the multi-modal nature of driver behavior, and the inherent uncertainty involved in the task make motion prediction of surrounding vehicles a challenging problem. In this paper, we present an LSTM model for interaction aware motion prediction of surrounding vehicles on freeways. Our model assigns confidence values to maneuvers being performed by vehicles and outputs a multi-modal distribution over future motion based on them. We compare our approach with the prior art for vehicle motion prediction on the publicly available NGSIM US-101 and I-80 datasets. Our results show an improvement in terms of RMS values of prediction error. We also present an ablative analysis of the components of our proposed model and analyze the predictions made by the model in complex traffic scenarios. <s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> LSTM-Based Methods <s> Multi-Object Tracking (MOT) is a challenging task in the complex scene such as surveillance and autonomous driving. In this paper, we propose a novel tracklet processing method to cleave and re-connect tracklets on crowd or long-term occlusion by Siamese Bi-Gated Recurrent Unit (GRU). The tracklet generation utilizes object features extracted by CNN and RNN to create the high-confidence tracklet candidates in sparse scenario. Due to mis-tracking in the generation process, the tracklets from different objects are split into several sub-tracklets by a bidirectional GRU. After that, a Siamese GRU based tracklet re-connection method is applied to link the sub-tracklets which belong to the same object to form a whole trajectory. 
In addition, we extract the tracklet images from existing MOT datasets and propose a novel dataset to train our networks. The proposed dataset contains more than 95160 pedestrian images. It has 793 different persons in it. On average, there are 120 images for each person with positions and sizes. Experimental results demonstrate the advantages of our model over the state-of-the-art methods on MOT16. <s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> LSTM-Based Methods <s> In recent deep online and near-online multi-object tracking approaches, a difficulty has been to incorporate long-term appearance models to efficiently score object tracks under severe occlusion and multiple missing detections. In this paper, we propose a novel recurrent network model, the Bilinear LSTM, in order to improve the learning of long-term appearance models via a recurrent network. Based on intuitions drawn from recursive least squares, Bilinear LSTM stores building blocks of a linear predictor in its memory, which is then coupled with the input in a multiplicative manner, instead of the additive coupling in conventional LSTM approaches. Such coupling resembles an online learned classifier/regressor at each time step, which we have found to improve performances in using LSTM for appearance modeling. We also propose novel data augmentation approaches to efficiently train recurrent models that score object tracks on both appearance and motion. We train an LSTM that can score object tracks based on both appearance and motion and utilize it in a multiple hypothesis tracking framework. In experiments, we show that with our novel LSTM model, we achieved state-of-the-art performance on near-online multiple object tracking on the MOT 2016 and MOT 2017 benchmarks. 
<s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> LSTM-Based Methods <s> This paper presents a novel approach for tracking static and dynamic objects for an autonomous vehicle operating in complex urban environments. Whereas traditional approaches for tracking often feature numerous hand-engineered stages, this method is learned end-to-end and can directly predict a fully unoccluded occupancy grid from raw laser input. We employ a recurrent neural network to capture the state and evolution of the environment, and train the model in an entirely unsupervised manner. In doing so, our use case compares to model-free, multi-object tracking although we do not explicitly perform the underlying data-association process. Further, we demonstrate that the underlying representation learned for the tracking task can be leveraged via inductive transfer to train an object detector in a data efficient manner. We motivate a number of architectural features and show the positive contribution of dilated convolutions, dynamic and static memory units to the task of tracking and classifying complex... <s> BIB007
Generally, methods that are based on non-recurrent CNN-only approaches are best suited to handle short scenes where quick reactions are required in a brief situation that can be captured in a limited number of frames. Various literature studies show that LSTM-based methods have more potential to ensure the proper handling of long-term dependencies while avoiding various mathematical pitfalls, such as gradients that shrink toward zero through repeated multiplication by small factors during backpropagation (the "vanishing gradient" problem), which in practice manifests as a mis-trained network resulting in drift effects and false positives. Handling long-term dependencies means having to deal with occlusions to a greater extent than in shorter term scenarios. Most approaches combine various classifiers which handle spatial and shape-based classification with LSTM components which account for temporal coherence. An early example of an RNN implementation uses an LSTM-based classifier to track objects in time, across multiple frames ( Figure 4 ). The authors demonstrate that an LSTM-based approach is better suited to removing and reinserting candidate observations to account for objects that leave/reenter the visible area of the scene. This provides a solution to the track initiation and termination problem based on data associations found in features obtained from the LSTM layers. This concept is exploited further by BIB002 where various cues are determined to assess long-term dependencies using a dual LSTM network. One LSTM component tracks motion, while the other handles interactions, and the two are combined to compute similarity scores between frames. The results show that applying recurrent components to lengthy sequences produces more reliable results than other methods which are based on frame pairs.
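To make the recurrent mechanism referenced throughout this section concrete, the following is a minimal pure-Python LSTM step: the forget and input gates decide how much of the running cell state is preserved versus overwritten, which is precisely what allows such trackers to retain long-term observations while dropping stale candidates. The scalar state and hand-set weights are simplifying assumptions for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step; W maps each gate name to (w_x, w_h, bias)."""
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])   # forget gate
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])   # input gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])   # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])  # candidate
    c = f * c_prev + i * g   # old state scaled by forget, new info scaled by input
    h = o * math.tanh(c)     # gated exposure of the cell state
    return h, c
```

With a strongly positive forget bias and a strongly negative input bias, the cell state is carried forward almost unchanged across steps, which is the behavior that lets a tracker "remember" an object through a brief occlusion.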
Some implementations using LSTM focus on tracking-while-driving problems, which pose additional challenges compared to most established benchmarks which use static cameras. As an alternative to most related approaches which attempt to create models of vehicle behavior, the authors of BIB003 circumvent the need for vehicle modeling by directly inputting sensor measurements into an LSTM network to predict future vehicle positions and to analyze temporal behavior. A more elaborate attempt is BIB004 where, instead of raw sensor data, the authors establish several maneuver classes and feed maneuver sequences to LSTM layers in order to generate probabilities for the occurrence of future maneuver instances. Eventually, multiple such maneuvers can be used to construct the trajectory and/or anticipate the intentions of the vehicles. Furthermore, increasing the length of the sequence increases accuracy and stability over time, up to a certain limit where the network saturates and no longer improves. A solution to this problem would be to split the features into multiple sub-features, followed by reconnecting them to form more coherent long-term trajectories. This is achieved in BIB005 where a combined CNN- and RNN-based feature extractor generates tracklets over lengthy sequences. The tracklets are split on frames which contain occlusions, while a recombination mechanism based on gated recurrent units (GRUs) recombines the tracklet pieces according to their similarities, followed by the reconstruction of the complete trajectory using polynomial curve fitting. Some authors make further modifications to LSTM layers to produce classifiers that generate abstract high-level features such as those found in appearance models. A good example in this sense is BIB006 where LSTM layers are modified to perform multiplicative operations and use customized gating schemes between the recurrent hidden state and the derived features.
Figure 4: An LSTM-based architecture used for temporal prediction
The newly-obtained LSTM layers are better at producing appearance-related features than conventional LSTMs, which excel at motion prediction. Where trajectory estimation is concerned, LSTM-based methods exploit the gating that takes place in the recurrent layers, as opposed to regular RNNs which pass candidate features into the next recurrent iteration without discriminating between them. The filters inherently present in gated LSTMs have the potential to eliminate unwanted feature candidates which, in actual use cases, may represent unwanted trajectory paths, while maintaining candidates which will eventually lead to correctly-estimated motion cues. Furthermore, LSTMs demonstrate an inherent capability to predict trajectories that are interrupted by occlusion events or by reduced acquisition capabilities. This idea is exploited in order to find solutions to the problem of estimating the layout of a full environment from limited sensor data, a concept referred to in the related literature as "seeing beyond seeing" BIB001 . Given a set of sensors with limited capability, the idea is to perform end-to-end tracking using raw sensor data without the need to explicitly identify high-level features or to have a preexisting detailed model of the environment. In this sense, recurrent architectures have the potential to predict and reconstruct occluded parts of a particular scene from incomplete or partial raw sensor output. The network is trained with partial data and it is updated through a mapping mechanism that makes associations with an unoccluded scene. Subsequently, the recurrent layers make their own internal associations and become capable of filling in the missing gaps that the sensors have been unable to acquire.
Specifically, given a hidden state of the world which is not directly captured by any sensor, an RNN is trained using sequences of partial observations in an attempt to update its belief concerning the hidden parts of the world. The resulting information is used to "unocclude" the scene which was initially only partially perceived through limited sensor data. Upon training, the network is capable of defining its own interpretation of the hidden state of the scene. The previously-mentioned result is elaborated upon by a group which includes the same authors BIB007 . A similar approach previously applied in basic robot guidance is extended for use in assisted driving. In this case more complex information can be inferred from raw sensor input, in the form of occupancy maps, which together with a deep network-based architecture allow for predicting the probabilities of obstacle presence even in occluded portions within the field of view.
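The "seeing beyond seeing" idea can be caricatured with a per-cell belief filter over an occupancy grid: visible cells are pulled toward the measurement, while occluded cells carry their previous belief forward, slowly decaying toward an uninformative prior. The learned recurrent network in the cited works replaces these hand-set update rules; the constants below are illustrative assumptions.

```python
def update_beliefs(beliefs, observations, trust=0.9, decay=0.05, prior=0.5):
    """One belief-update step over a flat occupancy grid.
    observations: per-cell occupancy in [0, 1], or None where occluded."""
    updated = []
    for b, z in zip(beliefs, observations):
        if z is None:
            # Occluded cell: carry the belief, decaying toward the prior.
            updated.append(b + decay * (prior - b))
        else:
            # Visible cell: blend the measurement into the belief.
            updated.append((1 - trust) * b + trust * z)
    return updated
```

Applied repeatedly over a sensor stream, a briefly occluded obstacle keeps a high occupancy belief for several frames before fading, which is the behavior the recurrent models learn end-to-end from data.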
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN). Our algorithm pretrains a CNN using a large set of videos with tracking groundtruths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify target in each domain. We train each domain in the network iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers in the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating the candidate windows randomly sampled around the previous target state. The proposed algorithm illustrates outstanding performance in existing tracking benchmarks. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual object tracking. The key to their success is the ability to efficiently exploit available negative data b ... <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> In recent years, Discriminative Correlation Filter (DCF) based methods have significantly advanced the state-of-the-art in tracking. However, in the pursuit of ever increasing tracking performance, their characteristic speed and real-time capability have gradually faded. Further, the increasingly complex models, with massive number of trainable parameters, have introduced the risk of severe over-fitting. 
In this work, we tackle the key causes behind the problems of computational complexity and over-fitting, with the aim of simultaneously improving both speed and performance. We revisit the core DCF formulation and introduce: (i) a factorized convolution operator, which drastically reduces the number of parameters in the model, (ii) a compact generative model of the training sample distribution, that significantly reduces memory and time complexity, while providing better diversity of samples, (iii) a conservative model update strategy with improved robustness and reduced complexity. We perform comprehensive experiments on four benchmarks: VOT2016, UAV123, OTB-2015, and TempleColor. When using expensive deep features, our tracker provides a 20-fold speedup and achieves a 13.0% relative gain in Expected Average Overlap compared to the top ranked method [12] in the VOT2016 challenge. Moreover, our fast variant, using hand-crafted features, operates at 60 Hz on a single CPU, while obtaining 65.0% AUC on OTB-2015. <s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> Convolutional neural network (CNN) has drawn increasing interest in visual tracking owing to its powerfulness in feature extraction. Most existing CNN-based trackers treat tracking as a classification problem. However, these trackers are sensitive to similar distractors because their CNN models mainly focus on inter-class classification. To address this problem, we use self-structure information of object to distinguish it from distractors. Specifically, we utilize recurrent neural network (RNN) to model object structure, and incorporate it into CNN to improve its robustness to similar distractors. Considering that convolutional layers in different levels characterize the object from different perspectives, we use multiple RNNs to model object structure in different levels respectively. 
Extensive experiments on three benchmarks, OTB100, TC-128 and VOT2015, show that the proposed algorithm outperforms other methods. Code is released at www.dabi.temple.edu/hbling/code/SANet/SANet.html. <s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> Recent algorithmic improvements and hardware breakthroughs resulted in a number of success stories in the field of AI impacting our daily lives. However, despite its ubiquity AI is only just starting to make advances in what may arguably have the largest impact thus far, the nascent field of autonomous driving. In this work we discuss this important topic and address one of crucial aspects of the emerging area, the problem of predicting future state of autonomous vehicle's surrounding necessary for safe and efficient operations. We introduce a deep learning-based approach that takes into account current state of traffic actors and produces rasterized representations of each actor's vicinity. The raster images are then used by deep convolutional models to infer future movement of actors while accounting for inherent uncertainty of the prediction task. Extensive experiments on real-world data strongly suggest benefits of the proposed approach. Moreover, following successful tests the system was deployed to a fleet of autonomous vehicles. <s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> In this review, we provide an overview of emerging trends and challenges in the field of intelligent and autonomous, or self-driving, vehicles. Recent advances in the field of perception, planning, and decision-making for autonomous vehicles have led to great improvements in functional capabilities, with several prototypes already driving on our roads and streets. Yet challenges remain regarding guaranteed performance and safety under all driving circumstances. 
For instance, planning methods that provide safe and system-compliant performance in complex, cluttered environments while modeling the uncertain interaction with other traffic participants are required. Furthermore, new paradigms, such as interactive planning and end-to-end learning, open up questions regarding safety and reliability that need to be addressed. In this survey, we emphasize recent approaches for integrated perception and planning and for behavior-aware planning, many of which rely on machine learning. This raises the question of ver... <s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> Predicting trajectories of pedestrians is quintessential for autonomous robots which share the same environment with humans. In order to effectively and safely interact with humans, trajectory prediction needs to be both precise and computationally efficient. In this work, we propose a convolutional neural network (CNN) based human trajectory prediction approach. Unlike more recent LSTM-based moles which attend sequentially to each frame, our model supports increased parallelism and effective temporal representation. The proposed compact CNN model is faster than the current approaches yet still yields competitive results. <s> BIB007 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Discussion <s> Autonomous driving is a challenging multiagent domain which requires optimizing complex, mixed cooperative-competitive interactions. Learning to predict contingent distributions over other vehicles' trajectories simplifies the problem, allowing approximate solutions by trajectory optimization with dynamic constraints. We take a model-based approach to prediction, in order to make use of structured prior knowledge of vehicle kinematics, and the assumption that other drivers plan trajectories to minimize an unknown cost function. 
We introduce a novel inverse optimal control (IOC) algorithm to learn other vehicles' cost functions in an energy-based generative model. Langevin Sampling, a Monte Carlo based sampling algorithm, is used to directly sample the control sequence. Our algorithm provides greater flexibility than standard IOC methods, and can learn higher-level, non-Markovian cost functions defined over entire trajectories. We extend weighted feature-based cost functions with neural networks to obtain NN-augmented cost functions, which combine the advantages of both model-based and model-free learning. Results show that model-based IOC can achieve state-of-the-art vehicle trajectory prediction accuracy, and naturally take scene information into account. <s> BIB008
Most of the results from the available literature focus on generating abstract, high-level features of the observations found in the processed images, since, generally, the more abstract the feature the more robust it should be to transformations, noise, drift and other undesired artifacts and effects. Most authors rely on an arrangement of CNNs where each component has a distinct role in the system, such as learning appearance models, geometric and spatial patterns, or learning temporal dependencies. It is worth noting that a strictly CNN-based method needs substantial tweaking and careful parameter adjustment before it can accomplish the complex task of consistent detection in space and across multiple frames. A system made up of multiple networks, each with its own purpose, is also difficult to properly train, requiring lots of data and having a greater risk of overfitting. However, complex, customized CNN solutions still seem to provide the best accuracies within the current state-of-the-art. Most such results also use frame pairs, or only a few elements from the video sequence, thereby making them unreliable for long-term tracking. LSTM-based architectures seem to show more promising results for ensuring long-term temporal coherence, since this is what they were designed for, while also being simpler to implement and train. For the purposes of autonomous driving, an LSTM-based method shows promise, considering that training should happen offline and that a heavily-optimized solution is needed to achieve a real-time response. Designing such a system also requires a fair amount of trial-and-error, since currently there is no well-established way to predict which network architecture is suited to a particular purpose. There are also very few solutions based on reinforcement learning for object tracking, especially considering that reinforcement learning has gained substantial momentum in automotive decision-making problems. 
Other less popular but promising solutions, such as GAN-based predictors, may be worthy of further study and experimentation. One particularly promising direction for automotive tracking is represented by solutions that make use of limited sensor data and that are able to efficiently predict the surrounding environment without requiring a full representation or reconstruction of the scene. These approaches circumvent the need for lengthy video sequences, heavy image processing and the computation of complicated object features, while being especially designed to handle occlusion and objects outside of the immediate field of view. As such, where automotive tracking is concerned, the available results from the state-of-the-art seem to suggest that an effective solution would make use of partial data while being able to handle temporal correlations across lengthy sequences using an LSTM component. So far, solutions based on deep neural networks show the most promise, since they offer the most robust features while being natively designed to solve focus-and-context problems in video sequences. In this sense, the results which seem most promising for the complex tracking problems described in this section are BIB001, BIB002, BIB003 and BIB004. Rule-based approaches to vehicle interaction are rather inflexible; they require a great effort to engineer and validate, and they usually generalize poorly to new scenarios BIB008. Learning-based approaches are promising because of the complexity of driving interactions and the need for generalization. However, learning-based systems require a large amount of data to cover the space of interactive behaviors. Because they capture the generative structure of vehicle trajectories, model-based methods can potentially learn more, from less data, than model-free methods. However, good cost functions are challenging to learn, and simple, hand-crafted representations may not generalize well across tasks and contexts. 
In general, model-based methods can be less flexible, and may underperform model-free methods in the limit of infinite data. Model-free methods take a data-driven approach, aiming to learn predictive distributions over trajectories directly from data. These approaches are more flexible and require less knowledge engineering in terms of the type of vehicles, maneuvers, and scenarios, but the amount of data they require may be prohibitive BIB008. Manually engineered models often impose unrealistic assumptions not supported by the data, e.g., that traffic always follows lanes, which motivated the use of learned models as an alternative. A large class of learned models are maneuver-based models, e.g., using hidden Markov models, which are object-centric approaches that predict the discrete actions of each object independently. Often, the independence assumption is not true, which is mitigated by the use of Bayesian networks that are computationally more expensive and not feasible for real-time tasks BIB005. Gaussian Process regression can also be used to address the motion prediction problem. It has desirable properties such as the ability to quantify uncertainty, but it is limited when modeling complex actor-environment interactions BIB005. Although it is possible to do multi-step prediction with a Kalman filter, it cannot be extended far into the future with reasonable accuracy. A multi-step prediction done solely by a Kalman filter was found to be accurate for up to 10-15 timesteps, after which the predictions diverged and the full 40-timestep prediction ended up being worse than constant velocity inference. This emphasizes the advantages of data-driven approaches, as it is possible to observe almost an infinite number of variables which may all affect the driver, whereas the Kalman filter relies solely on the physical movement of the vehicle. 
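To make the multi-step limitation concrete, the sketch below rolls a minimal 1-D constant-velocity Kalman filter forward without measurements; the dynamics, noise values and measurement sequence are illustrative assumptions, not taken from the cited experiments. Once updates stop, the state is merely extrapolated and the covariance grows at every step, which is why long prediction horizons degrade.

```python
# Minimal 1-D constant-velocity Kalman filter (state: position x, velocity v).
# Noise values q, r and the measurement sequence are illustrative assumptions.

def predict(x, v, P, q=0.1, dt=1.0):
    """Propagate state and covariance: x' = F x, P' = F P F^T + Q."""
    F = [[1.0, dt], [0.0, 1.0]]
    x, v = x + v * dt, v
    FP = [[F[i][0] * P[0][j] + F[i][1] * P[1][j] for j in range(2)] for i in range(2)]
    P = [[FP[i][0] * F[j][0] + FP[i][1] * F[j][1] + (q if i == j else 0.0)
          for j in range(2)] for i in range(2)]
    return x, v, P

def update(x, v, P, z, r=0.5):
    """Fuse a position-only measurement z (H = [1, 0])."""
    y = z - x                              # innovation
    S = P[0][0] + r                        # innovation covariance
    Kx, Kv = P[0][0] / S, P[1][0] / S      # Kalman gain
    x, v = x + Kx * y, v + Kv * y
    P = [[(1 - Kx) * P[0][0], (1 - Kx) * P[0][1]],
         [P[1][0] - Kv * P[0][0], P[1][1] - Kv * P[0][1]]]
    return x, v, P

# Filtering phase: a target moving at 1 unit/step, measured for 10 steps.
x, v, P = 0.0, 0.0, [[1.0, 0.0], [0.0, 1.0]]
for t in range(10):
    x, v, P = predict(x, v, P)
    x, v, P = update(x, v, P, float(t + 1))

# Multi-step prediction phase: no more measurements, pure extrapolation.
# The covariance P grows at every step, reflecting mounting uncertainty.
for _ in range(5):
    x, v, P = predict(x, v, P)
```

In practice the same filter is run per target with a higher-dimensional state, and the unbounded covariance growth during pure prediction is exactly what limits the usable horizon to a few timesteps.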
The data may also be a part of the problem, because the network learns what is present in the data and hopefully generalizes well, but there may always be situations where humans do not behave according to previous observations. This is one drawback of using neural networks. However, it seems that the advantages of using a data-driven approach outweigh the disadvantages. Because of the time constraints of real-time systems, some authors use simpler feed-forward CNN architectures for prediction BIB005. In general, deep CNNs, as robust, flexible, high-capacity function approximators, are able to model the complex relationship between sensory input and reward structure very well. Additionally, due to the convolutional operators, they are able to capture spatial correlations in the data BIB006. Some authors BIB007 state that CNNs are superior to LSTMs for temporal modeling, since trajectories are continuous in nature, do not have a complicated "state", and have high spatial and temporal correlations which can be exploited by computationally efficient convolution operations. Another approach is to learn policies from expert demonstrations by estimating the expert's cost function with inverse reinforcement learning and then extracting a policy from that cost function BIB006. However, this is often inefficient for real-time applications BIB005. Finally, it should be mentioned that in this section, we have addressed the trajectory prediction problem. A related, but distinct, problem is trajectory planning, i.e. finding an optimal path from the current location to a given goal location. Its aim is to produce smooth trajectories with small changes in curvature, so as to minimize both the lateral and the longitudinal acceleration of the ego vehicle. For this purpose, there are several methods reported in the literature, e.g. using cubic spline interpolation, trigonometric spline interpolation, Bézier curves, or clothoids, i.e. 
curves with a complex mathematical definition, which have a linear relation between the curvature and the arc length and allow smooth transitions from a straight line to a circular arc or vice versa.
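As a toy illustration of one of these planning primitives, the snippet below samples a cubic Bézier curve with de Casteljau's algorithm; the control points, which sketch a lane-change-like path, are purely illustrative.

```python
# Sketch: sampling a smooth path from a cubic Bezier curve via de
# Casteljau's algorithm.  The control points (a lane-change-like shape)
# are purely illustrative.

def de_casteljau(points, t):
    """Evaluate the Bezier curve defined by `points` at parameter t in [0, 1]."""
    pts = list(points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Start, two interior control points shaping the curvature, and goal.
control = [(0.0, 0.0), (10.0, 0.0), (20.0, 3.5), (30.0, 3.5)]
path = [de_casteljau(control, i / 20.0) for i in range(21)]
```

Because the curve starts and ends exactly at the first and last control points and stays inside their convex hull, curvature can be tuned by moving only the interior points, which is what makes Bézier segments convenient for local path smoothing.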
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> We introduce a computationally efficient algorithm for multi-object tracking by detection that addresses four main challenges: appearance similarity among targets, missing data due to targets being out of the field of view or occluded behind other objects, crossing trajectories, and camera motion. The proposed method uses motion dynamics as a cue to distinguish targets with similar appearance, minimize target mis-identification and recover missing data. Computational efficiency is achieved by using a Generalized Linear Assignment (GLA) coupled with efficient procedures to recover missing data and estimate the complexity of the underlying dynamics. The proposed approach works with track lets of arbitrary length and does not assume a dynamical model a priori, yet it captures the overall motion dynamics of the targets. Experiments using challenging videos show that this framework can handle complex target motions, non-stationary cameras and long occlusions, on scenarios where appearance cues are not available or poor. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Many recent advances in multiple target tracking aim at finding a (nearly) optimal set of trajectories within a temporal window. To handle the large space of possible trajectory hypotheses, it is typically reduced to a finite set by some form of data-driven or regular discretization. In this work, we propose an alternative formulation of multitarget tracking as minimization of a continuous energy. Contrary to recent approaches, we focus on designing an energy that corresponds to a more complete representation of the problem, rather than one that is amenable to global optimization. 
Besides the image evidence, the energy function takes into account physical constraints, such as target dynamics, mutual exclusion, and track persistence. In addition, partial image evidence is handled with explicit occlusion reasoning, and different targets are disambiguated with an appearance model. To nevertheless find strong local minima of the proposed nonconvex energy, we construct a suitable optimization scheme that alternates between continuous conjugate gradient descent and discrete transdimensional jump moves. These moves, which are executed such that they always reduce the energy, allow the search to escape weak minima and explore a much larger portion of the search space of varying dimensionality. We demonstrate the validity of our approach with an extensive quantitative evaluation on several public data sets. <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Online multi-object tracking aims at producing complete tracks of multiple objects using the information accumulated up to the present moment. It still remains a difficult problem in complex scenes, because of frequent occlusion by clutter or other objects, similar appearances of different objects, and other factors. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first propose the tracklet confidence using the detectability and continuity of a tracklet, and formulate a multi-object tracking problem based on the tracklet confidence. The multi-object tracking problem is then solved by associating tracklets in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive associations. 
Here, for reliable association between tracklets and detections, we also propose a novel online learning method using an incremental linear discriminant analysis for discriminating the appearances of objects. By exploiting the proposed learning method, tracklet association can be successfully achieved even under severe occlusion. Experiments with challenging public datasets show distinct performance improvement over other batch and online tracking methods. <s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> This paper presents a monocular and purely vision based pedestrian trajectory tracking and prediction framework with integrated map-based hazard inference. In Advanced Driver Assistance systems research, a lot of effort has been put into pedestrian detection over the last decade, and several pedestrian detection systems are indeed showing impressive results. Considerably less effort has been put into processing the detections further. We present a tracking system for pedestrians, which based on detection bounding boxes tracks pedestrians and is able to predict their positions in the near future. The tracking system is combined with a module which, based on the car's GPS position acquires a map and uses the road information in the map to know where the car can drive. Then the system warns the driver about pedestrians at risk, by combining the information about hazardous areas for pedestrians with a probabilistic position prediction for all observed pedestrians. <s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Online Multi-Object Tracking (MOT) has wide applications in time-critical video analysis scenarios, such as robot navigation and autonomous driving. 
In tracking-by-detection, a major challenge of online MOT is how to robustly associate noisy object detections on a new video frame with previously tracked objects. In this work, we formulate the online MOT problem as decision making in Markov Decision Processes (MDPs), where the lifetime of an object is modeled with a MDP. Learning a similarity function for data association is equivalent to learning a policy for the MDP, and the policy learning is approached in a reinforcement learning fashion which benefits from both advantages of offline-learning and online-learning for data association. Moreover, our framework can naturally handle the birth/death and appearance/disappearance of targets by treating them as state transitions in the MDP while leveraging existing online single object tracking methods. We conduct experiments on the MOT Benchmark [24] to verify the effectiveness of our method. <s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Online Multiple Target Tracking (MTT) is often addressed within the tracking-by-detection paradigm. Detections are previously extracted independently in each frame and then objects trajectories are built by maximizing specifically designed coherence functions. Nevertheless, ambiguities arise in presence of occlusions or detection errors. In this paper we claim that the ambiguities in tracking could be solved by a selective use of the features, by working with more reliable features if possible and exploiting a deeper representation of the target only if necessary. To this end, we propose an online divide and conquer tracker for static camera scenes, which partitions the assignment problem in local subproblems and solves them by selectively choosing and combining the best features. 
The complete framework is cast as a structural learning task that unifies these phases and learns tracker parameters from examples. Experiments on two different datasets highlights a significant improvement of tracking performances (MOTA +10%) over the state of the art. <s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> This paper revisits the classical multiple hypotheses tracking (MHT) algorithm in a tracking-by-detection framework. The success of MHT largely depends on the ability to maintain a small list of potential hypotheses, which can be facilitated with the accurate object detectors that are currently available. We demonstrate that a classical MHT implementation from the 90's can come surprisingly close to the performance of state-of-the-art methods on standard benchmark datasets. In order to further utilize the strength of MHT in exploiting higher-order information, we introduce a method for training online appearance models for each track hypothesis. We show that appearance models can be learned efficiently via a regularized least squares framework, requiring only a few extra operations for each hypothesis branch. We obtain state-of-the-art results on popular tracking-by-detection datasets such as PETS and the recent MOT challenge. <s> BIB007 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> This paper explores a pragmatic approach to multiple object tracking where the main focus is to associate objects efficiently for online and realtime applications. To this end, detection quality is identified as a key factor influencing tracking performance, where changing the detector can improve tracking by up to 18.9%. 
Despite only using a rudimentary combination of familiar techniques such as the Kalman Filter and Hungarian algorithm for the tracking components, this approach achieves an accuracy comparable to state-of-the-art online trackers. Furthermore, due to the simplicity of our tracking method, the tracker updates at a rate of 260 Hz which is over 20x faster than other state-of-the-art trackers. <s> BIB008 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> This paper proposes an alternative formulation to the pure pursuit path tracking algorithm for autonomous driving. The current approach has tendencies to cut corners, and therefore results in poor path tracking accuracy. The proposed method considers not only the relative position of the pursued point, but also the orientation of the path at that point. A steering control law is designed in accordance with the kinematic equations of motion of the vehicle. The effectiveness of the algorithm is then tested by implementing it on an autonomous golf cart, driving in a pedestrian environment. The experimental result shows that the new algorithm reduces the root mean square (RMS) cross track error for the same given pre-programmed path by up to 46 percent, while having virtually no extra computational cost, and still maintaining the chatter free property of the original pure pursuit controller. 
<s> BIB009 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> To help accelerate progress in multi-target, multi-camera tracking systems, we present (i) a new pair of precision-recall measures of performance that treats errors of all types uniformly and emphasizes correct identification over sources of error; (ii) the largest fully-annotated and calibrated data set to date with more than 2 million frames of 1080 p, 60 fps video taken by 8 cameras observing more than 2,700 identities over 85 min; and (iii) a reference software system as a comparison baseline. We show that (i) our measures properly account for bottom-line identity match performance in the multi-camera setting; (ii) our data set poses realistic challenges to current trackers; and (iii) the performance of our system is comparable to the state of the art. <s> BIB010 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Many state-of-the-art approaches to multi-object tracking rely on detecting them in each frame independently, grouping detections into short but reliable trajectory segments, and then further grouping them into full trajectories. This grouping typically relies on imposing local smoothness constraints but almost never on enforcing more global ones on the trajectories.,,In this paper, we propose a non-Markovian approach to imposing global consistency by using behavioral patterns to guide the tracking algorithm. When used in conjunction with state-of-the-art tracking algorithms, this further increases their already good performance on multiple challenging datasets. We show significant improvements both in supervised settings where ground truth is available and behavioral patterns can be learned from it, and in completely unsupervised settings. 
<s> BIB011 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Tracking-by-detection is a common approach to multi-object tracking. With ever increasing performances of object detectors, the basis for a tracker becomes much more reliable. In combination with commonly higher frame rates, this poses a shift in the challenges for a successful tracker. That shift enables the deployment of much simpler tracking algorithms which can compete with more sophisticated approaches at a fraction of the computational cost. We present such an algorithm and show with thorough experiments its potential using a wide range of object detectors. The proposed method can easily run at 100K fps while outperforming the state-of-the-art on the DETRAC vehicle tracking dataset. <s> BIB012 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Reliable prediction of surround vehicle motion is a critical requirement for path planning for autonomous vehicles. In this paper, we propose a unified framework for surround vehicle maneuver classification and motion prediction that exploits multiple cues, namely, the estimated motion of vehicles, an understanding of typical motion patterns of freeway traffic and intervehicle interaction. We report our results in terms of maneuver classification accuracy and mean and median absolute error of predicted trajectories against the ground truth for real traffic data collected using vehicle mounted sensors on freeways. An ablative analysis is performed to analyze the relative importance of each cue for trajectory prediction. Additionally, an analysis of execution time for the components of the framework is presented. Finally, we present multiple case studies analyzing the outputs of our model for complex traffic scenarios. 
<s> BIB013 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> This paper introduces geometry and object shape and pose costs for multi-object tracking in urban driving scenarios. Using images from a monocular camera alone, we devise pairwise costs for object tracks, based on several 3D cues such as object pose, shape, and motion. The proposed costs are agnostic to the data association method and can be incorporated into any optimization framework to output the pairwise data associations. These costs are easy to implement, can be computed in real-time, and complement each other to account for possible errors in a tracking-by-detection framework. We perform an extensive analysis of the designed costs and empirically demonstrate consistent improvement over the state-of-the-art under varying conditions that employ a range of object detectors, exhibit a variety in camera and object motions, and, more importantly, are not reliant on the choice of the association framework. We also show that, by using the simplest of associations frameworks (two-frame Hungarian assignment), we surpass the state-of-the-art in multi-object-tracking on road scenes. More qualitative and quantitative results can be found at the following URL: this https URL <s> BIB014 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Radar sensor has been an integral part of safety critical applications in automotive industry owing to its weather and lighting independence. The advances in radar hardware technology have made it possible to reliably detect objects using radar. Highly accurate radar sensors are able to give multiple radar detections per object. 
This work presents a postprocessing architecture, which is used to cluster and track multiple detections from one object in practical multiple object scenarios. Furthermore, the framework is tested and validated with various driving maneuvers and results are evaluated. <s> BIB015 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Urban-oriented autonomous vehicles require a reliable perception technology to tackle the high amount of uncertainties. The recently introduced compact 3D LIDAR sensor offers a surround spatial information that can be exploited to enhance the vehicle perception. We present a real-time integrated framework of multi-target object detection and tracking using 3D LIDAR geared toward urban use. Our approach combines sensor occlusion-aware detection method with computationally efficient heuristics rule-based filtering and adaptive probabilistic tracking to handle uncertainties arising from sensing limitation of 3D LIDAR and complexity of the target object movement. The evaluation results using real-world pre-recorded 3D LIDAR data and comparison with state-of-the-art works shows that our framework is capable of achieving promising tracking performance in the urban situation. <s> BIB016 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> The problem of tracking multiple objects in a video sequence poses several challenging tasks. For tracking-by-detection, these include object re-identification, motion prediction and dealing with occlusions. We present a tracker (without bells and whistles) that accomplishes tracking without specifically targeting any of these tasks, in particular, we perform no training or optimization on tracking data. 
To this end, we exploit the bounding box regression of an object detector to predict the position of an object in the next frame, thereby converting a detector into a Tracktor. We demonstrate the potential of Tracktor and provide a new state-of-the-art on three multi-object tracking benchmarks by extending it with a straightforward re-identification and camera motion compensation. We then perform an analysis on the performance and failure cases of several state-of-the-art tracking methods in comparison to our Tracktor. Surprisingly, none of the dedicated tracking methods are considerably better in dealing with complex tracking scenarios, namely, small and occluded objects or missing detections. However, our approach tackles most of the easy tracking scenarios. Therefore, we motivate our approach as a new tracking paradigm and point out promising future research directions. Overall, Tracktor yields superior tracking performance than any current tracking method and our analysis exposes remaining and unsolved tracking challenges to inspire future research directions. <s> BIB017 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Traditional Algorithms and Methods Focusing on High-Performance <s> Online multi-object tracking (MOT) is extremely important for high-level spatial reasoning and path planning for autonomous and highly-automated vehicles. In this paper, we present a modular framework for tracking multiple objects (vehicles), capable of accepting object proposals from different sensor modalities (vision and range) and a variable number of sensors, to produce continuous object tracks. This work is a generalization of the MDP framework for MOT proposed by Xiang et al. , with some key extensions - First, we track objects across multiple cameras and across different sensor modalities. This is done by fusing object proposals across sensors accurately and efficiently. 
Second, the objects of interest (targets) are tracked directly in the real world . This is a departure from traditional techniques where objects are simply tracked in the image plane. Doing so allows the tracks to be readily used by an autonomous agent for navigation and related tasks. To verify the effectiveness of our approach, we test it on real world highway data collected from a heavily sensorized testbed capable of capturing full-surround information. We demonstrate that our framework is well-suited to track objects through entire maneuvers around the ego-vehicle, some of which take more than a few minutes to complete. We also leverage the modularity of our approach by comparing the effects of including/excluding different sensors, changing the total number of sensors, and the quality of object proposals on the final tracking result. <s> BIB018
The Kalman filter is a popular method with many applications in navigation and control, particularly for predicting the future path of an object and for associating multiple objects with their trajectories, while demonstrating significant robustness to noise. Generally, Kalman-based methods are used for simpler tracking, particularly in online scenarios where the tracker only accesses a limited number of frames at a time, possibly only the current and previous ones. An example of the use of the Kalman filter is BIB008, where a combination of the aforementioned filter and the Munkres algorithm as the min-cost estimator is used in a simple setup focusing on performance. The method requires designing a dynamic model of the tracked objects' motion and is much more sensitive to the type of detector employed than other approaches; however, once such parameters are well established, the simplicity of the algorithms allows for significant real-time performance. Similar methods are frequently used in simple scenarios where a limited number of frames are available and the detections are accurate. In such situations, the simplicity of the implementations allows for quick response times even on low-spec embedded client devices. In the same spirit of providing an easy, straightforward method that works well for simple scenarios, BIB017 provide an approach based on bounding-box regression. Given multiple object bounding boxes in a sequence of frames, the authors develop a regressor which allows the prediction of bounding box positions in subsequent frames. This comes with some limitations: specifically, it requires that targets move only slightly from frame to frame, and it is therefore reliable in scenarios where the frame rate is high enough and relatively stable. Furthermore, a reliable detector is a must in such situations, and crowded scenes with frequent occlusion events are not handled properly. 
As with the previous approach, this is well suited for easy cases where robust image acquisition is available and performance and implementation simplicity are a priority. Unfortunately, noisy images are fairly common in automotive scenarios where, for efficiency and cost reasons, a compromise may be made in terms of the quality and performance of the cameras and sensors. It is often desirable that the software be robust to noise so as to minimize the hardware costs. In BIB004, tracking is done by a particle filter for each track. The authors use the Munkres assignment algorithm between bounding boxes in the current input image and the previous bounding box for each track. A cost matrix is populated with the cost for associating a bounding box with any given previous bounding box: the Euclidean distance between the box centers plus the size change of the box, as a bounding box is expected to be roughly the same size in two consecutive frames. Since boxes move and change size in bigger increments when the actors are close to the camera, the cost is weighted by the inverse of the box size. This approach is simple, but the assignment algorithm has an O(n³) complexity, which is probably too high for real-time tracking. Various attempts exist for improving noise robustness while maintaining performance, for example in BIB005. In this case, the lifetime of tracked objects is modeled using a Markov Decision Process (MDP). The policy of the MDP is determined using reinforcement learning, whose objective is to learn a similarity function for associating tracked objects. The positions and lifetimes of the objects are modeled using transitions between MDP states. BIB018 also use MDPs in a more generalized scheme, involving multiple sensors and cameras and fusing the results from multiple MDP formulations. 
Note that Markov models can be limiting when it comes to automotive tracking, since a typical scene with multiple interacting targets does not exhibit the Markov property, where the current state only depends on the previous one. In this regard, the related literature features multiple attempts to improve reliability. BIB013 propose an elaborate pipeline featuring multi-view tracking, ground plane projection, maneuver recognition and trajectory prediction using an assortment of approaches which include Hidden Markov Models and variational Gaussian mixture models. Such efforts show that an improvement over traditional algorithms involves sequencing together multiple different methods, each with its own role. As such, there is the risk that the overall resulting approach may be too fragmented and too cumbersome to implement, interpret and improve properly. Works such as BIB011 attempt to circumvent such limitations by proposing alternatives to tried-and-tested Markov models, in this case in the form of a system which determines behavioral patterns in an effort to ensure global consistency for tracking results. There are multiple ways to exploit behavior in order to guide the tracking process, for instance by learning and minimizing/maximizing an energy function that associates behavioral patterns to potential trajectory candidates. This concept is also exemplified by BIB002 , who propose a method based on minimizing a continuous energy function aimed at handling the very large space of potential trajectory solutions, considering that a limited, discrete set of behavior patterns imposes limitations on the energy function. While such a limitation offers better guarantees that a global optimum will eventually be reached, it may not allow a complete representation of the system. 
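One way to picture such an energy-based formulation: a candidate trajectory is scored by a data term pulling each point toward its claimed detection plus a smoothness term penalizing abrupt motion changes, and the trajectory is refined by gradient descent on that energy. The sketch below is a deliberately minimal 1-D version under assumed weights and step size, not the considerably richer continuous energy of BIB002.

```python
# Toy continuous energy for one 1-D trajectory: a data term keeps each
# point near its detection, a smoothness term penalizes second differences
# (abrupt velocity changes). Plain gradient descent; the weight w_smooth
# and learning rate are assumed illustrative values.

def energy(traj, dets, w_smooth=1.0):
    data = sum((t - d) ** 2 for t, d in zip(traj, dets))
    smooth = sum((traj[i - 1] - 2 * traj[i] + traj[i + 1]) ** 2
                 for i in range(1, len(traj) - 1))
    return data + w_smooth * smooth

def minimize(dets, steps=500, lr=0.02, w_smooth=1.0):
    traj = list(dets)                        # initialize at the detections
    for _ in range(steps):
        grad = [2 * (traj[i] - dets[i]) for i in range(len(traj))]
        for i in range(1, len(traj) - 1):    # gradient of the smoothness term
            g = 2 * (traj[i - 1] - 2 * traj[i] + traj[i + 1])
            grad[i - 1] += w_smooth * g
            grad[i] += -2 * w_smooth * g
            grad[i + 1] += w_smooth * g
        traj = [t - lr * g for t, g in zip(traj, grad)]
    return traj

noisy = [0.0, 1.3, 1.7, 3.2, 3.8, 5.1]      # noisy 1-D detection positions
smoothed = minimize(noisy)
```

Because the toy energy is convex, gradient descent finds the global optimum; the point made in the text is that restricting the energy to a limited set of behavior patterns buys exactly this kind of optimization tractability at the price of expressiveness.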
An alternative approach which is also designed to handle occlusions is BIB006 , where the divide-and-conquer paradigm is used to partition the solution space into smaller subsets, thereby optimizing the search for the optimal variant. The authors note that while detections and their respective trajectories can be extracted rather efficiently from crowded scenes, the presence of ambiguities induced by occlusion events may introduce significant detection errors. The proposed solution involves subdividing the object assignment problem into subproblems, followed by a selective combination of the best features found within the subdivisions (Figure 5). The number and types of the features are variable, thereby accounting for some level of flexibility for this approach. One particular downside is that once the scene changes, the problem itself also changes and the subdivisions need to reoccur and update, therefore making this method unsuitable for scenes acquired from moving cameras. A similar problem is posed in BIB003 , where it is also noted that complex scenes pose tracking difficulties due to occlusion events and similarities among different objects. This issue is handled by subdividing object trajectories into multiple tracklets and subsequently determining a confidence level for each such tracklet, based on its detectability and continuity. Actual trajectories are then formed from tracklets connected based on their confidence values. One advantage of this method in terms of performance is that tracklets can be added to already-determined trajectories in real time as they become available, without requiring complex processing or additional associations. Additionally, linear discriminant analysis is used to differentiate objects based on appearance criteria. The concept of appearance is more extensively exploited by BIB001 , who use motion dynamics to distinguish between targets with similar features. 
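The notion of tracklet confidence can be illustrated with a simple score that combines detection quality (detectability) with a continuity factor penalizing gaps. The exact formula below is hypothetical, chosen only to show the shape of such a score; it is not the confidence definition used in BIB003.

```python
# Hypothetical tracklet confidence: average detection score (detectability)
# scaled by a continuity factor that penalizes frames with missed detections.

def tracklet_confidence(det_scores, missed_frames):
    if not det_scores:
        return 0.0
    detectability = sum(det_scores) / len(det_scores)
    continuity = len(det_scores) / (len(det_scores) + missed_frames)
    return detectability * continuity

# A long, gap-free tracklet outranks a short, gappy one:
strong = tracklet_confidence([0.9, 0.8, 0.95, 0.9], missed_frames=0)
weak = tracklet_confidence([0.9, 0.8], missed_frames=3)
```

A tracker built on such scores would connect high-confidence tracklets into trajectories first, deferring or discarding low-confidence fragments.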
They approach the problem by determining a dynamics-based similarity between tracklets using generalized linear assignment. As such, targets are identified using motion cues, which are complementary to more well-established appearance models. While demonstrating adequate performance and accuracy, it is worth mentioning that motion-based features are sensitive to camera movement and are considerably more difficult to use in automotive situations, where motion assessment metrics that work well for static cameras may be less reliable when the cameras are in motion and image jittering and shaking occur.
Figure 5: An example of a divide-and-conquer approach which creates associations between detections BIB006
The idea of generating appearance models using traditional means is exemplified in BIB007 , who use a combination of appearance models learned using a regularized least-squares framework and a system for generating potential solution candidates in the form of a set of track hypotheses for each successful detection. The hypotheses are arranged in trees, each of which is scored and selected according to the best fit in terms of providing usable trajectories. An alternative to constructing an elaborate appearance model is proposed by BIB014 , who directly involve the shape and geometry of the detections within the tracking process, therefore using shape-based cost functions instead of ones based on pixel clusters. Furthermore, results focusing on tracking-while-driving problems may opt for a vehicle behavior model, or a kinematic model, as opposed to one that is based on appearance criteria. Examples of such approaches are BIB009 , BIB015 , where the authors build models of vehicle behavior from parameters such as steering angles, headings, offset distances, relative positions, etc. Note that kinematic and motion models are generally more suited to situations where the input consists of data from radar, LiDAR or GPS, as opposed to image sequences. 
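A kinematic vehicle model of the kind referenced above can be sketched with the standard kinematic bicycle model, which advances position and heading from speed and steering angle. This is a generic textbook model with an assumed wheelbase, offered as an illustration rather than the specific formulations of BIB009 or BIB015.

```python
# Kinematic bicycle model: advance (x, y, heading) given speed and steering
# angle. The wheelbase L = 2.7 m is an assumed illustrative value.
from math import cos, sin, tan

def step(x, y, heading, v, steer, L=2.7, dt=0.1):
    x += v * cos(heading) * dt
    y += v * sin(heading) * dt
    heading += v / L * tan(steer) * dt   # yaw rate induced by steering
    return x, y, heading

# One second straight at 10 m/s, then one second with constant left steering:
state = (0.0, 0.0, 0.0)
for _ in range(10):
    state = step(*state, v=10.0, steer=0.0)
straight = state                          # roughly (10, 0, 0)
for _ in range(10):
    state = step(*state, v=10.0, steer=0.2)
turned = state                            # heading and y drift positive (left)
```

Such a model is driven entirely by the steering-angle, heading and speed parameters mentioned above, which is why it pairs naturally with radar/LiDAR/GPS measurements rather than raw image sequences.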
In particular, attempting to reconstruct visual information from LiDAR point clouds is not a trivial task and may involve elaborate reconstruction, segmentation and registration preprocessing before a suitable detection and tracking pipeline can be designed BIB016 . Another class of results from the related literature follows a different paradigm. Instead of employing complex energy minimization functions and/or statistical modeling, other authors opt for a simpler, faster approach that works with a limited amount of information drawn from the video frames. The motivation is that in some cases the scenarios may be simple enough that a straightforward method which alleviates the need for extended processing may prove just as effective as more complex and elaborate counterparts. An example in this direction is BIB012 , whose method is based on scoring detections by determining overlaps between their bounding boxes across multiple consecutive frames. A scoring system is then developed based on these overlaps and, depending on the resulting scores, trajectories are formed from sets of successive overlaps of the same bounding boxes. Such a method does not directly handle crowded scenes, occlusions or fast-moving objects whose positions are far apart in consecutive frames; however, it may present a suitable compromise in terms of accuracy in scenarios where performance is critical and the embedded hardware may not allow for more complex processing. An additional important consideration for this type of problem is how the tracking method is evaluated. Most authors use a common, established set of benchmarks which, while having a certain degree of generality, cannot cover every situation that a vehicle might be found in. As such, some authors, such as BIB010 , devote their work to developing performance and evaluation metrics and data sets which allow for covering a wide range of potential problems which may arise in MOT scenarios. 
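The overlap-based linking idea can be approximated in a few lines: each detection in the current frame continues the trajectory of the previous-frame box it overlaps most, measured by intersection-over-union (IoU). This is a sketch of the general principle, with an assumed IoU threshold, not the exact scoring system of BIB012.

```python
# Greedy IoU-based linking: a current-frame detection joins the track of
# the previous-frame box with maximal overlap, if that overlap exceeds an
# assumed threshold. Boxes are (x1, y1, x2, y2).

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def link(prev_boxes, curr_boxes, thresh=0.3):
    links = {}
    for j, c in enumerate(curr_boxes):
        scores = [iou(p, c) for p in prev_boxes]
        best = max(range(len(scores)), key=scores.__getitem__)
        if scores[best] >= thresh:
            links[j] = best           # current box j continues track `best`
    return links

prev = [(0, 0, 10, 10), (50, 50, 60, 60)]
curr = [(49, 51, 59, 61), (1, 0, 11, 10)]
print(link(prev, curr))               # -> {0: 1, 1: 0}
```

The cheapness of this computation is the whole point: no appearance model, no optimizer, just per-frame overlaps, which is why such methods suit low-power embedded hardware while failing on crowded scenes and large inter-frame displacements.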
As such, the choice of the method used for tracking is as much a consequence of the diversity of situations and events claimed to be covered by the method as it is a result of the evaluation performed by the authors. For example, as was the case for NN-based methods, most evaluations are done for scenes with static cameras, which are only partly relevant for automotive applications. The advantage of the methods presented thus far lies in the fact that they generally outperform their counterparts in terms of the required processing power and computational resources, which is a plus for vehicle-based tracking where the client device is usually a low-power solution. Furthermore, some methods can be extended rather easily, as needed, for instance by incorporating additional features or criteria when assembling trajectories from individual detections, by finding an optimizer that ensures additional robustness, or, as is already the case with some of the previously-mentioned papers, by incorporating a lightweight supervised classifier in order to boost detection and tracking accuracy.
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Based on Graphs and Flow Models <s> We analyze the computational problem of multi-object tracking in video sequences. We formulate the problem using a cost function that requires estimating the number of tracks, as well as their birth and death states. We show that the global solution can be obtained with a greedy algorithm that sequentially instantiates tracks using shortest path computations on a flow network. Greedy algorithms allow one to embed pre-processing steps, such as nonmax suppression, within the tracking algorithm. Furthermore, we give a near-optimal algorithm based on dynamic programming which runs in time linear in the number of objects and linear in the sequence length. Our algorithms are fast, simple, and scalable, allowing us to process dense input data. This results in state-of-the-art performance. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Based on Graphs and Flow Models <s> Data association is an essential component of any human tracking system. The majority of current methods, such as bipartite matching, incorporate a limited-temporal-locality of the sequence into the data association problem, which makes them inherently prone to IDswitches and difficulties caused by long-term occlusion, cluttered background, and crowded scenes.We propose an approach to data association which incorporates both motion and appearance in a global manner. Unlike limited-temporal-locality methods which incorporate a few frames into the data association problem, we incorporate the whole temporal span and solve the data association problem for one object at a time, while implicitly incorporating the rest of the objects. In order to achieve this, we utilize Generalized Minimum Clique Graphs to solve the optimization problem of our data association method. 
Our proposed method yields a better formulated approach to data association which is supported by our superior results. Experiments show the proposed method makes significant improvements in tracking in the diverse sequences of Town Center [1], TUD-crossing [2], TUD-Stadtmitte [2], PETS2009 [3], and a new sequence called Parking Lot compared to the state of the art methods. <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Based on Graphs and Flow Models <s> We cast the problem of tracking several people as a graph partitioning problem that takes the form of an NP-hard binary integer program. We propose a tractable, approximate, online solution through the combination of a multi-stage cascade and a sliding temporal window. Our experiments demonstrate significant accuracy improvement over the state of the art and real-time post-detection performance. <s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Based on Graphs and Flow Models <s> Multi-target tracking is an interesting but challenging task in computer vision field. Most previous data association based methods merely consider the relationships (e.g. appearance and motion pattern similarities) between detections in local limited temporal domain, leading to their difficulties in handling long-term occlusion and distinguishing the spatially close targets with similar appearance in crowded scenes. In this paper, a novel data association approach based on undirected hierarchical relation hypergraph is proposed, which formulates the tracking task as a hierarchical dense neighborhoods searching problem on the dynamically constructed undirected affinity graph. The relationships between different detections across the spatiotemporal domain are considered in a high-order way, which makes the tracker robust to the spatially close targets with similar appearance. 
Meanwhile, the hierarchical design of the optimization process fuels our tracker to long-term occlusion with more robustness. Extensive experiments on various challenging datasets (i.e. PETS2009 dataset, ParkingLot), including both low and high density sequences, demonstrate that the proposed method performs favorably against the state-of-the-art methods. <s> BIB004 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Based on Graphs and Flow Models <s> Multi-object tracking has been recently approached with the min-cost network flow optimization techniques. Such methods simultaneously resolve multiple object tracks in a video and enable modeling of dependencies among tracks. Min-cost network flow methods also fit well within the “tracking-by-detection” paradigm where object trajectories are obtained by connecting per-frame outputs of an object detector. Object detectors, however, often fail due to occlusions and clutter in the video. To cope with such situations, we propose to add pairwise costs to the min-cost network flow framework. While integer solutions to such a problem become NP-hard, we design a convex relaxation solution with an efficient rounding heuristic which empirically gives certificates of small suboptimality. We evaluate two particular types of pairwise costs and demonstrate improvements over recent tracking methods in real-world video sequences. <s> BIB005 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Based on Graphs and Flow Models <s> The past decade has witnessed significant progress in object detection and tracking in videos. In this paper, we present a collaborative model between a pre-trained object detector and a number of single-object online trackers within the particle filtering framework. 
For each frame, we construct an association between detections and trackers, and treat each detected image region as a key sample, for online update, if it is associated to a tracker. We present a motion model that incorporates the associated detections with object dynamics. Furthermore, we propose an effective sample selection scheme to update the appearance model of each tracker. We use discriminative and generative appearance models for the likelihood function and data association, respectively. Experimental results show that the proposed scheme generally outperforms state-of-the-art methods. <s> BIB006 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Methods Based on Graphs and Flow Models <s> The majority of Multi-Object Tracking (MOT) algorithms based on the tracking-by-detection scheme do not use higher order dependencies among objects or tracklets, which makes them less effective in handling complex scenarios. In this work, we present a new near-online MOT algorithm based on non-uniform hypergraph, which can model different degrees of dependencies among tracklets in a unified objective. The nodes in the hypergraph correspond to the tracklets and the hyperedges with different degrees encode various kinds of dependencies among them. Specifically, instead of setting the weights of hyperedges with different degrees empirically, they are learned automatically using the structural support vector machine algorithm (SSVM). Several experiments are carried out on various challenging datasets (i.e., PETS09, ParkingLot sequence, SubwayFace, and MOT16 benchmark), to demonstrate that our method achieves favorable performance against the state-of-the-art MOT methods. <s> BIB007
A significant number of results from the related literature present the tracking solution as a graph search problem or otherwise model the tracking scene using a dependency graph or flow model. There are multiple advantages to using such an approach: graph-based models are well suited to the multi-tracking problem since the tracked scene, like a graph, is formed from inter-related nodes, each with a distinct set of parameter values. The relationships that can be determined among tracked objects or a set of trajectory candidates can be modeled using edges with edge costs. Graph theory is well understood, and graph traversal and search algorithms are widely available, with implementations readily found on most platforms. Likewise, flow models can be seen as an alternative interpretation of graphs, with node dependencies modeled through operators and dependency functions, forming an interconnected system. Unlike in a traditional graph, data in a flow model progresses in an established direction which starts from initial components where acquired data is handled as input; the data then traverses intermediate nodes where it is processed in some manner and ends up at terminal nodes where the results are obtained and exploited. Like graphs, flow models allow for loops, which implement refinement techniques and in-depth processing via multiple local iterations. Most methods which exploit graphs and flow models attempt to solve the tracking problem using a minimum-path or minimum-cost type approach. An example in this sense is BIB005 , where multi-object tracking is modeled using a network flow model subjected to min-cost optimization. Each path through the flow model represents a potential trajectory, formed by concatenating individual detections from each frame. Occlusion events are modeled as multiple potential directions arising from the occlusion node, and the proposed solution handles the resulting ambiguities by incorporating pairwise costs into the flow network. 
A more straightforward solution is presented by , who solve multi-tracking using dynamic programming and formulate the scenario as a linear program. They subsequently handle the large number of resulting variables and constraints using k-shortest paths. One advantage of this method seems to be that it allows for reliable tracking from only four overlapping low-resolution, low-fps video streams, which is in line with the cost-effectiveness required by automotive applications. Another related solution is BIB001 , where a cost function is developed from estimating the number of potential trajectories as well as their origins and end frames. Then, the scenario is handled as a shortest-path problem in a graph, which the authors solve using a greedy algorithm. This approach has the advantage that it uses well-established methods, therefore affording some level of simplicity to understanding and implementing the algorithms. In BIB003 , a similar graph-based solution divides the problem into multiple subproblems by exploring several graph partitioning mechanisms and uses greedy search based on Adaptive Label Iterative Conditional Modes. Partitioning allows for successful disassociation of object identities in circumstances where said identities might be confused with one another. Also, methods based on solution space partitioning have the advantage of being highly scalable, therefore allowing fine-tuning of their parameters in order to achieve a trade-off between accuracy and performance. Multiple extensions of the graph-based problem exist in the related literature, for instance when multiple other criteria are incorporated into the search method. BIB002 incorporate appearance and motion-based cues into their data association mechanism, which is modeled using a global graph representation and makes use of Generalized Minimum Clique Graphs to locate representative tracklets in each frame. 
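The shortest-path view of tracking can be made concrete with a small dynamic program that chains one detection per frame so as to minimize the summed transition cost. This is a toy single-track illustration of the general idea behind the greedy shortest-path approach of BIB001, not their full flow-network algorithm (which also estimates track births and deaths).

```python
# Dynamic-programming shortest path over frames: choose one detection per
# frame so that the total center-distance transition cost is minimal.
# A toy single-track version of the shortest-path tracking idea in BIB001.
from math import hypot

def best_track(frames):
    # frames: list of per-frame detection lists, each detection an (x, y).
    cost = [0.0] * len(frames[0])
    back = []
    for t in range(1, len(frames)):
        new_cost, choices = [], []
        for d in frames[t]:
            trans = [cost[i] + hypot(d[0] - p[0], d[1] - p[1])
                     for i, p in enumerate(frames[t - 1])]
            i = min(range(len(trans)), key=trans.__getitem__)
            new_cost.append(trans[i])
            choices.append(i)
        back.append(choices)
        cost = new_cost
    # backtrack from the cheapest final detection
    idx = min(range(len(cost)), key=cost.__getitem__)
    track = [idx]
    for choices in reversed(back):
        idx = choices[idx]
        track.append(idx)
    return list(reversed(track))      # detection index chosen in each frame

frames = [[(0, 0), (90, 90)],
          [(1, 1), (91, 89)],
          [(2, 2), (50, 50)]]
print(best_track(frames))             # -> [0, 0, 0]
```

The DP runs in time linear in the number of frames and quadratic in detections per frame; a multi-track version would, as in BIB001, greedily extract one shortest path at a time and remove its detections from the graph.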
Among other advantages, this allows for a longer time span to be handled, albeit for each object individually. Another related approach is provided in BIB006 , where the solution consists of a collaborative model which makes use of a detector and multiple individual trackers, whose interdependencies are determined by finding associations with key samples from each detected region in the processed frames. These interdependencies are further exploited via a sample selection method to generate and update appearance models for each tracker. As extensions of the more traditional graph-based models which use greedy algorithms to search for suitable candidate solutions and update the resulting models in subsequent processing steps, some authors handle the problem using hypergraphs.
Figure 6: Generation of trajectories by determining higher-order dependencies between tracklets via a hypergraph model with edge shapes determined using a learning method BIB007
Hypergraphs extend the concept of classical graphs by generalizing the role of graph edges. In a conventional graph an edge joins two nodes, while in a hypergraph edges are sets of arbitrary combinations of nodes. Therefore an edge in a hypergraph connects multiple nodes, instead of just two as in the traditional case. This structure has the potential to form more extensive and complete models using a singular unified concept and to alleviate the need for costly solution space partitioning or subdivision mechanisms. Another use of the hypergraph concept is provided by BIB004 , who build a hypergraph-based model to generate meaningful data associations capable of handling the problem of targets with similar appearance and in close proximity to one another, a situation frequently encountered in crowded scenes. 
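The structural difference from an ordinary graph can be made concrete: a hyperedge is simply a set of arbitrarily many nodes, optionally weighted by how well the whole group fits together. The minimal container below illustrates only the data structure, not the affinity or optimization machinery of BIB004 or BIB007.

```python
# Minimal hypergraph: nodes are tracklet ids; each hyperedge is a set of
# nodes carrying a weight that expresses group-wise (higher-order) affinity.

class Hypergraph:
    def __init__(self):
        self.edges = []                      # list of (frozenset, weight)

    def add_edge(self, nodes, weight):
        self.edges.append((frozenset(nodes), weight))

    def incident(self, node):
        # all hyperedges touching a node -- its higher-order neighborhood
        return [(e, w) for e, w in self.edges if node in e]

hg = Hypergraph()
hg.add_edge({"t1", "t2", "t3"}, weight=0.9)  # three mutually consistent tracklets
hg.add_edge({"t3", "t4"}, weight=0.4)        # an ordinary pairwise edge
print(len(hg.incident("t3")))                # -> 2
```

A degree-3 hyperedge like the first one encodes a joint constraint among three tracklets at once, which is exactly the kind of dependency a pairwise graph cannot express without auxiliary constructions.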
The hypergraph model allows for the formulation of higher-order relationships among various detections, which, as mentioned in previous sections, have the potential to ensure robustness against simple transformations, noise and various other spatial and temporal inaccuracies. The method is based on grouping dense neighborhoods of tracklets hierarchically, forming multiple layers which enable more fine-grained descriptions of the relationships that exist in each such neighborhood. A related but much more recent result BIB007 is also based on the notion that hypergraphs allow for determining higher-order dependencies among tracklets, but in this case the parameters of the hypergraph edges are learned using a structural support vector machine (SSVM), as opposed to being determined empirically. Trajectories are established as a result of determining higher-order dependencies by rearranging the edges of the hypergraph so as to conform to several constraints and affinity criteria. While demonstrating robustness to affine transforms and noise, such methods still cannot handle complex crowded scenes with multiple occlusions and, compared to previously-mentioned methods, suffer some penalties in terms of performance, since updating the various parameters of hypergraph edges can be computationally costly.
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Trajectory Prediction Methods <s> Predicting other traffic participants trajectories is a crucial task for an autonomous vehicle, in order to avoid collisions on its planned trajectory. It is also necessary for many Advanced Driver Assistance Systems, where the ego-vehicle's trajectory has to be predicted too. Even if trajectory prediction is not a deterministic task, it is possible to point out the most likely trajectory. This paper presents a new trajectory prediction method which combines a trajectory prediction based on Constant Yaw Rate and Acceleration motion model and a trajectory prediction based on maneuver recognition. It takes benefit on the accuracy of both predictions respectively a short-term and long-term. The defined Maneuver Recognition Module selects the current maneuver from a predefined set by comparing the center lines of the road's lanes to a local curvilinear model of the path of the vehicle. The overall approach was tested on prerecorded human real driving data and results show that the Maneuver Recognition Module has a high success rate and that the final trajectory prediction has a better accuracy. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Trajectory Prediction Methods <s> This paper describes an integrated Bayesian approach to maneuver-based trajectory prediction and criticality assessment that is not limited to specific driving situations. First, a distribution of high-level driving maneuvers is inferred for each vehicle in the traffic scene via Bayesian inference. For this purpose, the domain is modeled in a Bayesian network with both causal and diagnostic evidences and an additional trash maneuver class, which allows the detection of irrational driving behavior and the seamless application from highly structured to nonstructured environments. 
Subsequently, maneuver-based probabilistic trajectory prediction models are employed to predict each vehicle's configuration forward in time. Random elements in the designed models consider the uncertainty within the future driving maneuver execution of human drivers. Finally, the criticality time metric time-to-critical-collision-probability (TTCCP) is introduced and estimated via Monte Carlo simulations. The TTCCP is a generalization of the time-to-collision (TTC) in arbitrary uncertain multiobject driving environments and valid for longer prediction horizons. All uncertain predictions of all maneuvers of every vehicle are taken into account. Additionally, the criticality assessment considers arbitrarily shaped static environments, and it is shown how parametric free space (PFS) maps can advantageously be utilized for this purpose. <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Trajectory Prediction Methods <s> To safely and efficiently navigate through complex traffic scenarios, autonomous vehicles need to have the ability to predict the future motion of surrounding vehicles. Multiple interacting agents, the multi-modal nature of driver behavior, and the inherent uncertainty involved in the task make motion prediction of surrounding vehicles a challenging problem. In this paper, we present an LSTM model for interaction aware motion prediction of surrounding vehicles on freeways. Our model assigns confidence values to maneuvers being performed by vehicles and outputs a multi-modal distribution over future motion based on them. We compare our approach with the prior art for vehicle motion prediction on the publicly available NGSIM US-101 and I-80 datasets. Our results show an improvement in terms of RMS values of prediction error. We also present an ablative analysis of the components of our proposed model and analyze the predictions made by the model in complex traffic scenarios. 
<s> BIB003 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Trajectory Prediction Methods <s> Predicting trajectories of pedestrians is quintessential for autonomous robots which share the same environment with humans. In order to effectively and safely interact with humans, trajectory prediction needs to be both precise and computationally efficient. In this work, we propose a convolutional neural network (CNN) based human trajectory prediction approach. Unlike more recent LSTM-based models which attend sequentially to each frame, our model supports increased parallelism and effective temporal representation. The proposed compact CNN model is faster than the current approaches yet still yields competitive results. <s> BIB004
Autonomous cars need to have the ability to predict the future motion of surrounding vehicles in order to navigate through complex traffic scenarios safely and efficiently. The existence of multiple interacting agents, the multi-modal nature of driver behavior, and the inherent uncertainty involved make motion prediction a challenging problem. An autonomous vehicle deployed in complex traffic needs to balance two factors: the safety of humans in and around it, and efficient motion without stalling traffic. The vehicle should also take the initiative, such as deciding when to change lanes, cross unsignalized intersections, or overtake other vehicles BIB003 . This requires the autonomous car to have some ability to reason about the future state of the environment. Other difficulties come from the requirement that such a system must be sensitive to exceptional, rarely occurring situations. It should not only consider physical quantities but also information about the drivers' intentions and, because of the great number of possibilities involved, it should take into account only a reasonable subset of possible future scene evolutions BIB002 . One way to plan a safe maneuver is to understand the intent of other traffic participants, i.e. the combination of discrete high-level behaviors as well as the continuous trajectories describing future motion . Predicting other traffic participants' trajectories is a crucial task for an autonomous vehicle, in order to avoid collisions on its planned trajectory. Even if trajectory prediction is not a deterministic task, it is possible to specify the most likely trajectory BIB001 . Certain considerations about vehicle dynamics can provide partial knowledge of the future. For instance, a vehicle moving at a given speed needs a certain time to fully stop, and the curvature of its trajectory has to be under a certain value in order to keep stability. 
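The physics-based short-term prediction mentioned above, such as the Constant Yaw Rate and Acceleration (CYRA) model used by BIB001, rolls the current state forward under fixed yaw rate and acceleration. The sketch below integrates the model with small Euler steps instead of a closed-form solution; the horizon and step size are assumed illustrative values.

```python
# Constant Yaw Rate and Acceleration (CYRA) trajectory prediction,
# integrated with small Euler steps (a sketch; BIB001 use such a model for
# short-term prediction before blending in maneuver recognition).
from math import cos, sin

def predict(x, y, heading, v, yaw_rate, accel, horizon, dt=0.01):
    path = []
    steps = int(horizon / dt)
    for _ in range(steps):
        x += v * cos(heading) * dt
        y += v * sin(heading) * dt
        heading += yaw_rate * dt
        v += accel * dt
        path.append((x, y))
    return path

# Straight road, gentle braking: 10 m/s, -1 m/s^2, 2 s horizon.
path = predict(0, 0, 0, v=10.0, yaw_rate=0.0, accel=-1.0, horizon=2.0)
x_final, y_final = path[-1]           # roughly 10*2 - 0.5*1*2^2 = 18 m ahead
```

This also illustrates the first property listed below: the model is trustworthy over short horizons, while over longer ones the fixed-yaw-rate assumption drifts away from actual driver behavior, which is exactly where maneuver recognition takes over in BIB001.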
On the other hand, even if each driver has their own habits, it is possible to identify some common driving maneuvers based on traffic rules, or to assume that drivers keep some level of comfort while driving BIB001 . In order to effectively and safely interact with humans, trajectory prediction needs to be both precise and computationally efficient BIB004 . A recent white paper states that a solution for the prediction and planning tasks of an autonomous car may consider a combination of the following properties: • Predicting only a short time into the future. The likelihood of an accurate prediction is inversely related to the time between the current state and the point in time it refers to, i.e. the further the predicted state is in the future, the less likely it is that the prediction is correct; • Relying on physics where possible, using dynamic models of road users that form the basis of motion prediction. A classification of relevant objects is a necessary input to be able to discriminate between various models; • Considering the compliance of other road users with traffic rules to a valid extent. For example, the ego car should cross intersections with green traffic lights without stopping, relying on other road users to follow the rule of stopping at red lights. In addition to this, foreseeable non-compliant behavior with respect to traffic rules, e.g. pedestrians crossing red lights in urban areas, needs to be taken into account, supported by defensive drive planning; • Predicting the situation to further increase the likelihood of road user prediction being correct. For example, the future behavior of other road users when driving in a traffic jam differs greatly from their behavior in flowing traffic. Further, it asserts that the interpretation and prediction system should understand not only the worst-case behavior of other road users (possibly vulnerable ones, i.e. who may not obey all traffic rules), but their worst-case reasonable behavior. 
This allows it to make reasonable and physically possible assumptions about other road users. The automated driving system should make a naturalistic assumption, just as humans do, about the reasonable behavior of others. These assumptions need to be adaptable to local requirements so that they meet locally different "driving cultures".
A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Problem Description <s> Predicting other traffic participants' trajectories is a crucial task for an autonomous vehicle, in order to avoid collisions on its planned trajectory. It is also necessary for many Advanced Driver Assistance Systems, where the ego-vehicle's trajectory has to be predicted too. Even though trajectory prediction is not a deterministic task, it is possible to point out the most likely trajectory. This paper presents a new trajectory prediction method which combines a trajectory prediction based on a Constant Yaw Rate and Acceleration motion model with a trajectory prediction based on maneuver recognition. It benefits from the accuracy of both predictions, short-term and long-term respectively. The defined Maneuver Recognition Module selects the current maneuver from a predefined set by comparing the center lines of the road's lanes to a local curvilinear model of the path of the vehicle. The overall approach was tested on prerecorded real human driving data and the results show that the Maneuver Recognition Module has a high success rate and that the final trajectory prediction has better accuracy. <s> BIB001 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Problem Description <s> We introduce a Deep Stochastic IOC RNN Encoder-decoder framework, DESIRE, for the task of future predictions of multiple interacting agents in dynamic scenes. DESIRE effectively predicts future locations of objects in multiple scenes by 1) accounting for the multi-modal nature of the future prediction (i.e., given the same context, the future may vary), 2) foreseeing the potential future outcomes and making a strategic prediction based on them, and 3) reasoning not only from the past motion history, but also from the scene context as well as the interactions among the agents.
DESIRE achieves these in a single end-to-end trainable neural network model, while being computationally efficient. The model first obtains a diverse set of hypothetical future prediction samples employing a conditional variational auto-encoder, which are ranked and refined by the following RNN scoring-regression module. Samples are scored by accounting for accumulated future rewards, which enables better long-term strategic decisions similar to IOC frameworks. An RNN scene context fusion module jointly captures past motion histories, the semantic scene context and interactions among multiple agents. A feedback mechanism iterates over the ranking and refinement to further boost the prediction accuracy. We evaluate our model on two publicly available datasets: KITTI and Stanford Drone Dataset. Our experiments show that the proposed model significantly improves the prediction accuracy compared to other baseline methods. <s> BIB002 </s> A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving <s> Problem Description <s> Recent algorithmic improvements and hardware breakthroughs resulted in a number of success stories in the field of AI impacting our daily lives. However, despite its ubiquity, AI is only just starting to make advances in what may arguably have the largest impact thus far, the nascent field of autonomous driving. In this work we discuss this important topic and address one of the crucial aspects of the emerging area, the problem of predicting the future state of an autonomous vehicle's surroundings, necessary for safe and efficient operation. We introduce a deep learning-based approach that takes into account the current state of traffic actors and produces rasterized representations of each actor's vicinity. The raster images are then used by deep convolutional models to infer the future movement of actors while accounting for the inherent uncertainty of the prediction task.
Extensive experiments on real-world data strongly suggest the benefits of the proposed approach. Moreover, following successful tests, the system was deployed to a fleet of autonomous vehicles. <s> BIB003 </s>
To tackle the trajectory prediction task, one can assume access to real-time data streams coming from sensors such as lidar, radar or cameras installed aboard the self-driving vehicle, and that there already exists a functioning tracking system that allows the detection and tracking of traffic actors in real-time. Examples of pieces of information that describe an actor are: bounding box, position, velocity, acceleration, heading, and heading change rate. Mapping data of the area where the ego car is driving may also be needed, i.e. road and crosswalk locations, lane directions, and other relevant map information. Past and future positions are represented in an ego car-centric coordinate system. Also, one needs to model the static context with road and crosswalk polygons, as well as lane directions and boundaries: road polygons describe the drivable surface, lanes describe the driving path, and crosswalk polygons describe the road surface used for pedestrian crossing BIB003 . An example of available information on which the prediction module can operate is presented in Figure 7 .
More formally, considering the future as a consequence of a series of past events, a prediction entails reasoning about probable outcomes based on past observations BIB002 . Let X^i_t be a vector with the spatial coordinates of actor i at observation time t, with t ∈ {1, 2, ..., T_obs}, where T_obs is the present time step in the series of observations. The past trajectory of actor i is the sequence X^i = {X^i_1, X^i_2, ..., X^i_{T_obs}}. Based on the past trajectories of all actors, one needs to estimate the future trajectories of all actors, i.e. Y^i = {Y^i_{T_obs+1}, Y^i_{T_obs+2}, ..., Y^i_{T_pred}}, where T_pred is the last predicted time step.
It is also possible to first generate the trajectories in the Frenet frame along the current lane of the vehicle, then convert them to the initial Cartesian coordinate system BIB001 . The Frenet coordinate system is useful to simplify the motion equations when cars travel on curved roads. It consists of longitudinal and lateral axes, denoted as s and d, respectively.
The curve that goes through the center of the road determines the s axis and indicates how far along the road the car is. The d axis indicates the lateral displacement of the car. d is 0 at the center of the road and its absolute value increases with the distance from the center; it can be positive or negative, depending on the side of the road.
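As an illustration of the Cartesian-to-Frenet conversion described above, the following sketch projects a point onto a centerline given as a polyline. This is a hypothetical helper under simplifying assumptions (a piecewise-linear centerline; real systems usually fit a smoother reference curve), not an implementation from the cited papers:

```python
import math

def cartesian_to_frenet(px, py, centerline):
    """Project a Cartesian point onto a polyline centerline.

    Returns (s, d): s is the arc length along the centerline up to the
    projection point, d the signed lateral offset, positive to the left
    of the driving direction.
    """
    best = None  # (distance, s, d) for the closest segment so far
    s_accum = 0.0
    for (x0, y0), (x1, y1) in zip(centerline, centerline[1:]):
        dx, dy = x1 - x0, y1 - y0
        seg_len = math.hypot(dx, dy)
        # Orthogonal projection parameter, clamped to the segment.
        t = ((px - x0) * dx + (py - y0) * dy) / (seg_len ** 2)
        t = max(0.0, min(1.0, t))
        qx, qy = x0 + t * dx, y0 + t * dy
        dist = math.hypot(px - qx, py - qy)
        # Cross product gives the side: positive means left of the lane.
        side = dx * (py - qy) - dy * (px - qx)
        d = dist if side >= 0 else -dist
        if best is None or dist < best[0]:
            best = (dist, s_accum + t * seg_len, d)
        s_accum += seg_len
    return best[1], best[2]

# Straight road along the x axis: a point 2 m left of the center and
# 3 m along the road maps to (s, d) = (3, 2).
s, d = cartesian_to_frenet(3.0, 2.0, [(0.0, 0.0), (10.0, 0.0)])
```

The inverse conversion, from a predicted (s, d) trajectory back to Cartesian coordinates, walks the same arc-length parametrization in the opposite direction, which is how Frenet-frame predictions are mapped back for use by the planner.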