
2009-2010

June 25, 2010, 11:00
Optimization algorithms: A statistical signal processing perspective
Easter Selvan, National Research Council (Napoli, Italy)

A few applications of biologically inspired/heuristic algorithms and of a set of unconstrained deterministic optimization algorithms are presented. First, a stochastic mammogram model - a breast cancer risk predictor - constructed with parameter estimates obtained from heuristic algorithms is shown to outperform the model based on the expectation-maximization (EM) algorithm. Next, a simple approach for implicitly imposing the orthonormality constraint while estimating the independent components (ICs) is briefly described; this technique exemplifies the use of a global optimizer that handles the orthonormality constraint in a more natural way when the optimization landscape is non-convex. Finally, a set of unconstrained deterministic optimizers tailored to work with a trivial manifold is reported to noticeably improve the unmixing and segmentation results in our experiments, as a consequence of relaxing the orthonormality constraint among the ICs.

May 31, 2010
DYSCO study day
(details to follow)

May 26-28, 2010
Research retreat
Claire Fontaine, La Roche en Ardenne

http://www.clairefontaine.be/

May 7, 2010, 11:00
Equilibrium Solution to the Reversed Auction Game
Sebastian Bernhardsson, Umeå University (Sweden)

Game theory attempts to mathematically capture behavior in strategic situations, in which an individual's success in making choices depends on the choices of others. The game is defined by a set of rules which determines the possible actions a player can take, and a strategy is the player's complete plan for taking those actions. In this talk I will present the lowest unique positive integer game (reversed auction) and address the equilibrium concept, under which no one can enhance their individual payoff by a unilateral change when all the others follow a certain strategy.
The game consists of n players each choosing a number between 1 and n. The player with the lowest number that no one else chose wins. In this setting the combinatorial possibilities to consider become involved even for a small number of players, which has hindered a precise analysis in previous works. I will present a systematic way of reaching the equilibrium solution for a general number of players, and show that this game is an example of the conflict between group and individual interests. I will also show the results of a small experiment with real human players.
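
As a rough illustration of the rules only, the following Python sketch simulates the game for a symmetric mixed strategy; the particular strategy vector is a made-up example, not the equilibrium solution derived in the talk.

    import numpy as np

    def play_round(n, strategy, rng):
        # One round: each of the n players independently draws a number in 1..n
        # from the mixed strategy; the lowest number chosen exactly once wins.
        picks = rng.choice(np.arange(1, n + 1), size=n, p=strategy)
        counts = np.bincount(picks, minlength=n + 1)
        unique = np.flatnonzero(counts == 1)
        if unique.size == 0:
            return None                              # no unique number, nobody wins
        return int(np.flatnonzero(picks == unique.min())[0])

    rng = np.random.default_rng(1)
    n = 5
    strategy = np.array([0.45, 0.25, 0.15, 0.10, 0.05])  # hypothetical mixed strategy
    wins = np.zeros(n)
    rounds = 100_000
    for _ in range(rounds):
        w = play_round(n, strategy, rng)
        if w is not None:
            wins[w] += 1
    print(wins / rounds)   # all players follow the same strategy, so win rates are roughly equal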

April 30, 2010, 11:00
Cerny's Conjecture
Narad Rampersad, University of Waterloo and Université de Liège

In the 1960's, Cerny conjectured that a synchronizing automaton with n states has a reset word of length at most (n-1)^2. The conjecture is still open and its resolution is one of the most well-known and long-standing open problems in automata theory. In this talk we give an overview of the conjecture and partial progress towards its resolution. We also discuss the related "road colouring problem", which was solved by Trakhtman in 2007.

An automaton can be viewed as a directed multigraph where every vertex has constant outdegree k, and the outgoing arcs at each vertex are labeled by distinct elements of {1,2,...,k}. Such an automaton is synchronizing if there is a vertex v and a sequence of edge labels (called the reset word) such that, no matter what vertex we start from, following the sequence of edges described by the reset word always takes us to the vertex v. Cerny's conjecture is that a synchronizing automaton with n vertices has a reset word of length at most (n-1)^2. Currently, the best known result in this direction is that a synchronizing automaton has a reset word of length at most (n^3 - n)/6, which can be found using a "greedy" algorithm.
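
As a small illustration (not part of the talk), the sketch below finds a shortest reset word by breadth-first search over subsets of states; this is exact but exponential in general, unlike the greedy algorithm mentioned above. The Cerny automaton C_4 used in the example belongs to the classical family attaining the (n-1)^2 bound.

    from collections import deque

    def shortest_reset_word(transitions):
        # Breadth-first search over subsets of states: exact, but exponential in general.
        # transitions[letter][state] = next state.
        n = len(next(iter(transitions.values())))
        start = frozenset(range(n))
        queue, seen = deque([(start, "")]), {start}
        while queue:
            subset, word = queue.popleft()
            if len(subset) == 1:
                return word                          # every start state is now synchronized
            for letter, t in transitions.items():
                image = frozenset(t[s] for s in subset)
                if image not in seen:
                    seen.add(image)
                    queue.append((image, word + letter))
        return None                                  # automaton is not synchronizing

    # Cerny's automaton C_4: 'a' cyclically shifts the states, 'b' merges state 0 into state 1.
    n = 4
    C4 = {"a": [(i + 1) % n for i in range(n)],
          "b": [1 if i == 0 else i for i in range(n)]}
    print(shortest_reset_word(C4))                   # a word of length (n-1)^2 = 9, e.g. baaabaaab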

April 23, 2010, 11:00
A graph-theory based approach to graph mining
Jan Ramon, KULeuven

In the field of data mining, the last decade has seen an increasing interest in using graphs to represent data, as they strike a good balance between expressivity and tractability. Still, most research in this area has been rather practical. In our current work, we aim at applying results from graph theory and combinatorial optimisation to advance the state of the art in graph mining.

In this seminar, the field of graph mining will first be introduced. Then, a number of theory-related challenges, recent results and ideas will be discussed. In particular, the main focus will be on the different forms of pattern mining, which can be considered basic tools for collecting statistics. Even though many problems on graphs are intractable, there often exist tractable variants which are acceptable for the application under consideration, properties of the application domain one can exploit, or good approximation techniques. This discussion will be complemented with illustrative examples, from fields such as chemo-informatics, image recognition, traffic networks and web mining.

April 19, 2010, 15:30
From Gauss to Google: why matrices matter
Paul Van Dooren (Francqui Chair, University of Antwerp)

Prof.dr. Paul Van Dooren (Université Catholique de Louvain) has been awarded the Belgian Francqui Chair 2009-2010. As holder of this chair he will present at the University of Antwerp the lecture series "Matrix methods for systems and control". The lecture series commences on Monday, April 19, 2010 at 3.30 p.m. with the inaugural lecture "From Gauss to Google: why matrices matter". Rector Alain Verschoren and prof.dr. Herwig Leirs (dean of the Faculty of Sciences) are pleased to invite you to this occasion. The inaugural lecture will take place at G.010-Jan Fabre Auditorium, building G, Campus Middelheim, Middelheimlaan 1, Antwerp. You can register for this occasion before April 1 by email to Martine Vermeiren (martine.vermeiren@ua.ac.be), mentioning the number of participants.

The lecture series continues with

Friday, April 23, 2010: Distance problems, spectra and pseudospectra.
Friday, April 30, 2010: Model reduction of dynamical systems.
Friday, May 07, 2010: Dominant feature extraction and structured matrices.
Friday, May 21, 2010: Networks and graphs.

All these lectures are delivered from 2 to 5.30 p.m. in room G.005 at Campus Middelheim, including a coffee/tea break. Please also confirm your attendance at these lectures with Martine Vermeiren. Further information is provided on the attached flyer and can be obtained from prof.dr. Karel In 't Hout (karel.inthout@ua.ac.be).

April 16, 2010, 11:00
Structured sparse principal component analysis and dictionary learning
Francis Bach, École Normale Supérieure de Paris

We present an extension of sparse PCA, or sparse dictionary learning, where the sparsity patterns of all dictionary elements or decomposition coefficients are structured and constrained to belong to a prespecified set of shapes. This structured sparse PCA is based on a structured regularization recently introduced by Jenatton et al. (2009). While classical sparse priors only deal with cardinality, the regularization we use encodes higher-order information about the data. We propose an efficient and simple optimization procedure to solve this problem. Experiments with three practical tasks, the denoising of sparse structured signals, face recognition, and topic models for text documents, demonstrate the benefits of the proposed structured approach over unstructured approaches. (Joint work with R. Jenatton, J. Mairal and G. Obozinski.)
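
For contrast with the structured penalty described above, here is a minimal sketch of classical cardinality-based sparse PCA via the truncated power method (keep only the k largest loadings at each power iteration); this is the kind of unstructured baseline the talk compares against, not the method of Jenatton et al., and the toy data are arbitrary.

    import numpy as np

    def sparse_pc(X, k, n_iter=200, seed=0):
        # Leading sparse principal component by truncated power iteration on X^T X:
        # after each multiplication, keep only the k largest-magnitude loadings.
        S = X.T @ X
        v = np.random.default_rng(seed).standard_normal(S.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(n_iter):
            v = S @ v
            mask = np.zeros_like(v)
            mask[np.argsort(np.abs(v))[-k:]] = 1.0   # cardinality constraint
            v *= mask
            v /= np.linalg.norm(v)
        return v

    # toy data: 100 samples, 20 variables, shared signal on the first 5 variables
    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 20))
    X[:, :5] += 3.0 * rng.standard_normal((100, 1))
    print(np.round(sparse_pc(X, k=5), 2))            # nonzero loadings concentrate on the first 5 variables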

April 09, 2010, 11:00
A fast algorithm for updating and downsizing the dominant kernel principal components
Nicola Mastronardi, CNR Bari

N. Mastronardi (CNR, Bari, Italy), joint work with E. Tyrtyshnikov and P. Van Dooren

Many important kernel methods in the machine learning area, such as kernel principal component analysis, feature approximation, denoising, compression and prediction require the computation of the dominant set of eigenvectors of the symmetric kernel Gram matrix. Recently, an efficient incremental approach was presented for the fast calculation of the dominant kernel eigenbasis. In this talk we propose faster algorithms for incrementally updating and downsizing the dominant kernel eigenbasis. These methods are well-suited for large scale problems since they are both efficient in terms of complexity and data management.

[1] G. Gins, I. Smets, and J. Van Impe, Efficient tracking of the dominant eigenspace of a normalized kernel matrix, Neural Computation, 20 (2008), pp. 523-554.
[2] L. Hoegaerts, L. De Lathauwer, I. Goethals, J. Suykens, J. Vandewalle, and B. De Moor, Efficiently updating and tracking the dominant kernel principal components, Neural Networks, 20 (2007), pp. 220-229.
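
For readers unfamiliar with the object being updated, the sketch below computes, in batch form, the dominant eigenbasis of a centred RBF Gram matrix; this is the quantity that the incremental updating and downsizing algorithms of the talk keep track of. The kernel, bandwidth and data here are arbitrary toy choices, not those of the talk.

    import numpy as np
    from scipy.sparse.linalg import eigsh

    # toy data and an RBF (Gaussian) kernel Gram matrix
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 10))
    sq = np.sum(X**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * 1.0**2))

    # centre the Gram matrix in feature space (standard kernel PCA step)
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J

    # dominant eigenbasis: the quantity the incremental algorithms maintain
    vals, vecs = eigsh(Kc, k=5, which="LA")
    print(vals[::-1])                                # five largest eigenvalues, in descending order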

March 15, 2010, 14:00
Independent component analysis and functional Magnetic Resonance Imaging
Ingrid Daubechies, Princeton University

Independent Component Analysis (ICA), a method for separating a mixture of different components into its constituents, has been proposed for a variety of different applications, including functional Magnetic Resonance Imaging (fMRI) of brain processes. The presentation summarizes the findings of several years of interaction between applied mathematicians and neuroscientists, expert in fMRI, concentrating on probing ICA methods for brain fMRI. This study raised questions, informed by mathematical considerations, that are investigated using numerical simulations and specially designed fMRI experiments. The intent was not to cast doubt on the successes of ICA methods for fMRI data analysis, but rather to understand the elements that determine the methods' success; this led us to a surprising result.

Ingrid Daubechies received both her Bachelor's and Ph.D. degrees (in 1975 and 1980) from the Free University in Brussels, Belgium. She held a research position at the Free University until 1987. From 1987 to 1994 she was a member of the technical staff at AT&T Bell Laboratories, during which time she took leaves to spend six months (in 1990) at the University of Michigan, and two years (1991--93) at Rutgers University. She is now at the Mathematics Department and the Program in Applied and Computational Mathematics at Princeton University. Her research interests focus on the mathematical aspects of time-frequency analysis, in particular wavelets, as well as applications.

In 1998 she was elected to be a member of the National Academy of Sciences and a Fellow of the Institute of Electrical and Electronics Engineers. The American Mathematical Society awarded her a Leroy P. Steele prize for exposition in 1994 for her book "Ten Lectures on Wavelets," as well as the 1997 Ruth Lyttle Satter Prize. From 1992 to 1997 she was a fellow of the John D. and Catherine T. MacArthur Foundation. She is a member of the American Academy of Arts and Sciences, the American Mathematical Society, the Mathematical Association of America, the Society for Industrial and Applied Mathematics, and the Institute of Electrical and Electronics Engineers.

March 5, 2010, 11:00
Extreme degeneracies characterize the module identification problem for complex networks
Yves-Alexandre de Montjoye, UCLouvain

Identifying modular structure in complex networks is a fundamental task for understanding the function, dynamics, robustness and evolution of complex biological, technological and social systems. Although widely used in practice, the accuracy of popular identification techniques, such as the one based on optimizing the quantity called modularity, remains poorly characterized. In this presentation, we will show that in addition to the previously observed resolution limit, the modularity function Q exhibits extreme degeneracies, admitting an exponential number of high-modularity but structurally distinct solutions. These results contradict the widely held assumption that the modularity function typically exhibits a clear global optimum, and imply that modules identified via modularity maximization are unlikely to be unique and should be interpreted with extreme caution.

Work in collaboration with Aaron Clauset and Benjamin Good
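
For reference, the modularity Q discussed above can be evaluated for any candidate partition as in the toy sketch below; the point of the talk is that, in large networks, exponentially many structurally different partitions can reach nearly the same Q.

    import numpy as np

    def modularity(A, labels):
        # Newman-Girvan modularity Q of a partition of an undirected graph:
        # A is the symmetric adjacency matrix, labels[i] is the module of node i.
        m2 = A.sum()                                  # 2m, each edge counted twice
        k = A.sum(axis=1)
        same = labels[:, None] == labels[None, :]
        return ((A - np.outer(k, k) / m2) * same).sum() / m2

    # two triangles joined by one edge: the obvious split scores Q ~ 0.357
    A = np.zeros((6, 6))
    for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
        A[i, j] = A[j, i] = 1
    print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))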

February 12, 2010, 11:00
Mixed presentations on time-evolving graphs
Arnaud Browet, Vincent Traag and Gautier Krings, UCLouvain

Arnaud Browet:
In the context of computer vision, it is quite popular to analyze an image or video frame as a weighted undirected graph. In this short talk, I will present the basic concepts of community detection in graphs, designed here to find objects in the input image, based on modularity optimization with the Louvain method. I will also present a possible extension to video tracking and some results on both synthetic and real images.

Vincent Traag:
The recently developed slice-modularity can be effectively applied to detect communities in a temporal setting. I will show how it can be implemented in the Louvain method, by 'flattening' the network, also allowing for negative links. To demonstrate the use of the algorithm, I will show the communities in a network of international relationships over the course of the 20th century.

Gautier Krings:
The analysis of time-evolving graphs always starts with a simple manipulation of the data: the dataset is sliced into several time windows that represent consecutive "pictures" of the data. The shape of the graphs generated by those time windows is influenced by the size of the window, and is strongly connected to the timescales that govern the processes underlying the analyzed dataset. Choosing an appropriate window size is thus an important step in the analysis, but this choice is usually neglected by authors. I will present recent advances on a methodology that allows one to uncover the timescales of a dataset and helps to choose an appropriate window length.
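
A minimal sketch of the slicing step described above, assuming the raw data come as timestamped edges; the toy call log and one-hour window are made-up illustrations, not the methodology of the talk.

    from collections import defaultdict

    def slice_into_windows(events, window):
        # Group timestamped edges (t, u, v) into consecutive fixed-length windows;
        # each window yields one aggregated edge list, i.e. one graph snapshot.
        t0 = min(t for t, _, _ in events)
        snapshots = defaultdict(list)
        for t, u, v in events:
            snapshots[int((t - t0) // window)].append((u, v))
        return [snapshots[i] for i in sorted(snapshots)]

    # toy call log: (timestamp in seconds, caller, callee), sliced into 1-hour windows
    events = [(0, "a", "b"), (1200, "b", "c"), (4000, "a", "c"), (7300, "c", "d")]
    for i, snap in enumerate(slice_into_windows(events, window=3600)):
        print(f"window {i}: {snap}")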

February 2-3-5 and 16-17-19, 2010, 09:15
Graduate course on networks: models, community structure, dynamics
Santo Fortunato (ISI Foundation, Torino, Italy)

First session of the graduate course "Complex Networks: Structure and Dynamics". More information at http://sites.uclouvain.be/socn/SOCN-Courses0910.html#course2

January 26, 2010, 11:00
A scalable technique to compute optimal experimental designs on large networks
Guillaume Sagnol, INRIA (France)

The theory of optimal experimental design explains how to best select experiments in order to estimate a set of parameters. When the number of parameters is large, the problem of finding the optimal amount of experimental effort to spend on each available experiment is computationally very challenging, because the objective function of this optimization problem is a spectral function of a very large matrix. Instead, we propose to solve a series of simpler problems, where one wishes to estimate some randomly chosen linear combination of the parameters, and then to combine the resulting designs. In this talk, we will see that each of these subproblems - which are known as c-optimal design problems - can be solved very efficiently by second-order cone programming. Our new approach allows us to solve much larger instances than the semidefinite programming (SDP) based approaches, which are considered to be the state of the art for this problem. Our proof is based on Lagrangian duality techniques, and shows that the SDP formulation of the c-optimal design problem always has a solution which is a matrix of rank 1, which explains why the complexity of this problem fades. We will apply our method to the estimation of the traffic on each origin-destination (OD) pair of an Internet backbone network, by optimizing the set of nodes where measurements should be performed, as well as the sampling rates of the measurements. We will present experimental results relying on real data from a commercial network.
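
As a hedged illustration only: in the classical rank-one case (each experiment observes a single linear form a_i^T theta), the c-optimal design problem reduces, via Elfving's theorem, to an L1-minimization solvable as a linear program, with design weights proportional to |lambda_i|. The talk's contribution concerns the more general second-order cone formulation, which is not reproduced here, and the data below are random toy values.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    m, p = 50, 5                           # 50 candidate experiments, 5 parameters
    A = rng.standard_normal((m, p))        # row i = observation vector a_i of experiment i
    c = rng.standard_normal(p)             # we want to estimate c^T theta

    # minimize sum(l_plus + l_minus)  s.t.  A^T (l_plus - l_minus) = c,  l_plus, l_minus >= 0
    cost = np.ones(2 * m)
    res = linprog(cost, A_eq=np.hstack([A.T, -A.T]), b_eq=c, bounds=(0, None))
    lam = res.x[:m] - res.x[m:]
    w = np.abs(lam) / np.abs(lam).sum()    # optimal design weights (Elfving representation)
    print("support size:", np.count_nonzero(w > 1e-8), " optimal variance:", res.fun ** 2)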

December 17, 2009, 15:00
Urban network and society
Francesco Calabrese, Senseable City Laboratory, MIT

Over the past decade, our built environment has become thoroughly permeated by digital wireless signals. What is new about this blanket of bits is that, unlike the old unidirectional signals of radio or TV, it is bidirectional. The packets transmitted by cellular phones and WiFi-enabled portable devices are tied to patterns of human behavior and, if properly mined, can potentially reveal information about how people move and interact with each other. In this talk, I will present recent research on the analysis of telecommunication network data to better understand dynamics at the city and country level.

December 11, 2009, 11:00
Large eigenvalue problems and spectral clustering
Katrijn Frederix (K.U.Leuven)

When performing spectral clustering [1] on large data sets, one has to look for the eigenvectors corresponding to the largest eigenvalues of a matrix whose size depends on the number of data points. This number can be huge. A possibility to avoid solving this large eigenvalue problem is to approximate the similarity matrix by a low rank matrix and then solve the reduced eigenvalue problem. In this talk we will investigate how we can adaptively look for a suitable rank for this approximation. The study will be illustrated by numerical examples.

[1] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17:395-416, 2007.
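
A minimal sketch of the standard spectral clustering pipeline from the tutorial cited in [1], using the full similarity matrix (i.e. without the low-rank approximation studied in the talk); the two-rings data, RBF bandwidth and number of clusters are toy choices.

    import numpy as np
    from scipy.sparse.linalg import eigsh
    from scipy.cluster.vq import kmeans2

    # two noisy concentric rings: a standard example where plain k-means fails
    rng = np.random.default_rng(0)
    n = 400
    r = np.r_[np.ones(n // 2), 3 * np.ones(n // 2)] + 0.1 * rng.standard_normal(n)
    theta = 2 * np.pi * rng.random(n)
    X = np.c_[r * np.cos(theta), r * np.sin(theta)]

    # RBF similarity matrix and its symmetrically normalised version
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * 0.5**2))
    Dinv = 1.0 / np.sqrt(W.sum(axis=1))
    M = Dinv[:, None] * W * Dinv[None, :]

    # eigenvectors of the k largest eigenvalues, row-normalised, then k-means
    k = 2
    vals, U = eigsh(M, k=k, which="LA")
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    _, labels = kmeans2(U, k, minit="++")
    print(np.bincount(labels))             # roughly 200 / 200, one ring per cluster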

December 04, 2009, 11:00
Similarity Measures in Graphs
Thomas Cason (Large Graphs and Networks, UCL)

Measures of graph similarity have a broad array of applications, including comparing chemical structures, navigating complex networks like the World Wide Web, and analyzing different kinds of biological data.

Similarity can be measured at a global or a local level. In the local case one talks about node-to-node similarity. The latter can be used to compare the local structure of the graph around two nodes, or to compare the roles played by two nodes in the graph.

I will give an overview of various definitions of similarity, in particular the one introduced by Vincent Blondel et al. We will discuss its complexity and consider a less expensive iteration that finds a low-rank approximation of the similarity matrix.
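
A small sketch of the similarity iteration of Blondel et al., as I understand it: the score matrix is repeatedly updated as S <- B S A^T + B^T S A and renormalised, with an even number of iterations; the two toy path graphs are only an illustration.

    import numpy as np

    def node_similarity(A, B, n_iter=100):
        # Node-to-node similarity between two directed graphs with adjacency
        # matrices A and B: iterate S <- (B S A^T + B^T S A) / ||.||_F,
        # stopping after an even number of steps.
        S = np.ones((B.shape[0], A.shape[0]))
        for _ in range(2 * n_iter):
            S = B @ S @ A.T + B.T @ S @ A
            S /= np.linalg.norm(S)
        return S

    # tiny example: a directed path on 3 nodes versus a directed path on 4 nodes
    A = np.diag(np.ones(2), k=1)           # 0 -> 1 -> 2
    B = np.diag(np.ones(3), k=1)           # 0 -> 1 -> 2 -> 3
    S = node_similarity(A, B)
    print(np.round(S, 3))                  # S[j, i] = similarity between node j of B and node i of A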

November 27, 2009
PAI DYSCO day

November 25, 2009, 11:00
Decentralized computation and self-organization on anonymous networks
Julien Hendrickx, Laboratory for Information and Decision Systems, MIT

We propose a model for deterministic distributed function computation by a network of identical and anonymous nodes, with bounded computation and storage capabilities that do not scale with the network size. We exhibit a class of functions that cannot be computed in this model, and prove that every function outside this class can at least be approximated. The problem of computing averages in a distributed manner plays a central role in our development. We then investigate the minimal conditions required for anonymous networks to self-organize by assigning unique identifiers and routing addresses to each node. We show that it suffices for each node to be allowed to draw one random bit once.
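
Since distributed averaging plays a central role in the development, here is a minimal sketch of the basic linear consensus iteration on a small fixed graph; it illustrates only the averaging primitive, not the bounded-memory anonymous model analysed in the talk, and the graph and step size are arbitrary.

    import numpy as np

    # linear consensus: every node repeatedly replaces its value by a weighted
    # average of its own value and its neighbours' values (W = I - eps * Laplacian)
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)       # undirected connected graph on 4 nodes
    deg = A.sum(axis=1)
    eps = 0.9 / deg.max()
    W = np.eye(4) - eps * (np.diag(deg) - A)         # doubly stochastic weight matrix

    x = np.array([1.0, 5.0, 3.0, 7.0])               # initial values; true average is 4.0
    for _ in range(200):
        x = W @ x
    print(x)                                         # every entry converges to 4.0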

November 20, 2009, 11:00
Computation of symbolic dynamics of one and two dimensional discrete time systems
Lorenzo Sella (CWI, Amsterdam)

Real-valued discrete-time dynamical systems frequently exhibit very complex behaviour which makes them difficult to study. A powerful way to analyse these systems is to translate them into symbolic systems: systems of infinite sequences of symbols over a finite alphabet, where the dynamics is given by the shift operator, which maps a sequence to the same sequence shifted by one position. These kinds of systems are easier to analyse and understand than systems over real numbers; in particular, it is simpler to compute properties such as the topological entropy, which gives a measure of the level of complexity of the system. In this talk we present algorithms for the computation of the symbolic dynamics of piecewise continuous dynamical systems in one dimension and piecewise affine dynamical systems in two dimensions. The algorithms for one-dimensional systems are based on covering relations combined with kneading theory, while the algorithms for the two-dimensional systems rely on the properties of the decomposition of the Conley index. These algorithms also provide a computation of the topological entropy of the system. Finally, we show how these results can be applied to the computation of the discrete dynamics of a hybrid system via a suitable return map.
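
As a toy illustration of the idea (not of the covering-relation or Conley-index algorithms of the talk), the sketch below computes symbolic itineraries of the tent map and estimates its topological entropy, log 2, from the growth rate of the number of distinct symbolic words observed.

    import numpy as np

    def tent(x):
        return 2 * x if x < 0.5 else 2 * (1 - x)

    def itinerary(x, length):
        # symbolic itinerary: symbol 0 on [0, 1/2), symbol 1 on [1/2, 1]
        word = []
        for _ in range(length):
            word.append(0 if x < 0.5 else 1)
            x = tent(x)
        return tuple(word)

    # crude Monte Carlo estimate of the topological entropy: growth rate of the
    # number of distinct length-n symbolic words seen along randomly started orbits
    rng = np.random.default_rng(0)
    n = 10
    words = {itinerary(rng.random(), n) for _ in range(200_000)}
    print(len(words), np.log(len(words)) / n)        # close to 2**n = 1024 and log 2 ~ 0.693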

November 5, 2009, 11:00
Estimating the joint spectral radius via ergodic theory
Dr Ian Morris (Warwick Mathematics Institute, University of Warwick, UK)

The joint spectral radius of a bounded set of matrices is the maximum possible exponential growth rate of long products of matrices drawn from that set. It has been noticed that periodic products of matrices often appear to give excellent approximations to the joint spectral radius. Using ideas from multiplicative ergodic theory, we prove a strong estimate on the rate of this approximation. If time permits, we may briefly discuss other recent applications of ergodic-theoretic ideas to the theory of the joint spectral radius.

http://www.warwick.ac.uk/staff/Ian.Morris/
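
A small sketch of the approximation by periodic products mentioned above: every product of length k yields the lower bound rho(A_{i1}...A_{ik})^{1/k} on the joint spectral radius. The pair of shear matrices below is a standard toy example, whose joint spectral radius is believed to equal the golden ratio and is attained already by a period-2 product; it is not taken from the talk.

    import numpy as np
    from itertools import product

    def jsr_lower_bound(mats, max_len):
        # lower bound on the joint spectral radius from all periodic products
        # of length up to max_len: max over products of rho(product)^(1/length)
        best = 0.0
        for k in range(1, max_len + 1):
            for idx in product(range(len(mats)), repeat=k):
                P = np.eye(mats[0].shape[0])
                for i in idx:
                    P = P @ mats[i]
                rho = max(abs(np.linalg.eigvals(P)))
                best = max(best, rho ** (1.0 / k))
        return best

    A0 = np.array([[1.0, 1.0], [0.0, 1.0]])          # shear to the right
    A1 = np.array([[1.0, 0.0], [1.0, 1.0]])          # shear downwards
    print(jsr_lower_bound([A0, A1], max_len=6))      # ~1.618, the golden ratio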

October 30, 2009, 11:00
Communities in Networks
Mason Porter (Oxford Centre for Industrial and Applied Mathematics, University of Oxford, UK)

Networks arise pervasively in biology, physics, technology, the social sciences, and myriad other areas. They typically exhibit a complicated mixture of random and structured features. Over the past several years, my collaborators and I have conducted several studies of cohesive mesoscopic structures known as "communities," which consist of groups of nodes that are closely related. In this talk, I will discuss some of our results on network community structure in social, political, financial, and biological networks.

October 21, 2009, 11:00
Blind watchmaker networks and the meta book concept: An overview of recent projects
Sebastian Bernhardsson (Umeå University, Sweden)

http://www.tp.umu.se/~sebbeb/

It has been found that many different kinds of systems share a common structure, namely a broad, power-law-like degree distribution. For networks this feature shows up as the number of nodes with a certain number of connections, and for, e.g., written texts as the number of words with a certain frequency (Zipf's law). A vast amount of research has been devoted to explaining this property, in both cases, and to inventing null models used to verify the existence of other structural properties. I will here present a maximum-entropy solution for networks called the blind watchmaker network. This scale-free network turns out to be very similar to metabolic networks and can successfully be used as a null model to identify non-random network properties. It has also recently been demonstrated that the frequency distribution of a novel depends on the length of the text, and furthermore that the same is true for the average occurrence of a word. A meta book concept will be presented which captures these size dependencies. This concept states that the writings of an author behave in the same way as if they were a small section pulled out of an abstract, infinite meta book. As a consequence, the number of words with frequency k will be described by the expression ~1/k for an infinite book.
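
A minimal sketch of the empirical quantities discussed above: the rank-frequency list of a text (the Zipf curve) and the count N(k) of distinct words occurring exactly k times; "book.txt" is a placeholder file name for any plain-text novel.

    from collections import Counter
    import re

    def rank_frequency(text):
        # word frequencies sorted from most to least common (the Zipf curve)
        words = re.findall(r"[a-z']+", text.lower())
        return Counter(words).most_common()

    text = open("book.txt", encoding="utf-8").read()     # placeholder file name
    freqs = rank_frequency(text)
    for rank, (word, count) in enumerate(freqs[:10], start=1):
        print(rank, word, count)

    # N(k): number of distinct words occurring exactly k times in this text
    N = Counter(count for _, count in freqs)
    print([(k, N[k]) for k in sorted(N)[:10]])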

October 6, 2009, 14:00
Fast computation on SO(3) of optimal oriented bounding box
Chang Chia-Tche, Gorissen Bastien (UCL)

Photorealistic real-time rendering and physically correct simulations of 3D scenes are at the bleeding edge of what can be done with today's computers. Such applications involve collision detection and visibility tests using methods based on oriented bounding boxes (OBBs) enclosing 3D objects. Joseph O'Rourke published in 1985 an algorithm that computes the exact solution in cubic time, but it is extremely hard to implement. In practice, faster PCA-based and brute-force heuristics are used. The computation of the minimal-volume OBB is formulated as an unconstrained optimization problem on the rotation group SO(3). It is solved using a hybrid method combining the genetic and Nelder-Mead algorithms. This method is analyzed and then compared to the current state of the art, and it turns out to be either faster or more accurate.
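
For comparison, a sketch of the PCA-based heuristic mentioned in the abstract (fast but not minimal in general); it is not the hybrid genetic/Nelder-Mead optimization over SO(3) presented in the talk, and the random point cloud stands in for a real mesh.

    import numpy as np

    def pca_obb(points):
        # PCA-based oriented bounding box: align the box axes with the principal
        # directions of the point cloud, then take the axis-aligned extents there.
        c = points.mean(axis=0)
        _, _, Vt = np.linalg.svd(points - c, full_matrices=False)
        local = (points - c) @ Vt.T                  # coordinates in the PCA frame
        lo, hi = local.min(axis=0), local.max(axis=0)
        return Vt, lo, hi, np.prod(hi - lo)          # rotation, extents, volume

    rng = np.random.default_rng(0)
    pts = rng.standard_normal((1000, 3)) @ np.diag([3.0, 1.0, 0.3])   # elongated toy cloud
    R, lo, hi, vol = pca_obb(pts)
    print("PCA-OBB volume:", vol)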

September 29, 2009, 11:00
Weighted networks with community effects. A case in macroeconomy
Francisco Redelico (Catholic University of Argentina)

It has been said that the economy is one of the most complex systems, because of the number of constituent agents and their nonlinear relationships. For that reason, complex networks could be a key tool for its analysis. In this talk I will present an analysis based on weighted complex networks of the yearly evolution of the Gross Domestic Product for both developing and developed countries, looking for structural differences between the two groups.

September 15, 2009, 11:00
Communication sequences and temporal correlations in a mobile communication network - first results
Jari Saramaki (Helsinki University of Technology, Finland)

Social networks, like many other natural networks, are inherently dynamic entities. While "traditional" network analysis has focused on features of static snapshots representing the network at a given point in time, social networks involve dynamics on multiple time scales, ranging from the short time scales of interaction events and their sequences to the long scales of network growth and reorganization. Here, the goal is to understand the dynamics of communication events (the short time scale) in human mobile telephone communication activity, and ultimately to link the observations to structural and dynamic features of the communication network arising on longer time scales. I will discuss "causal" sequences, where events appear to be triggered by earlier events, and the observation of dynamic motifs in these data. The consequences of such temporal correlations have also been investigated using sociodynamic models as "probes" of the data, with the outcome that, compared to an uncorrelated reference, the actual communication sequences appear to result in slower dynamics.
