WP2: ESTIMATION AND MODELING (UCL, KUL, UGent, VUB, ULg, UMons, UNamur)

Introduction and state of the art

The area of system identification is at the basis of systems and control theory and is an indispensable tool for almost every engineering application involving dynamical systems. While the area is reasonably well established for standard linear time-invariant systems, much research is still needed to make it an efficient and effective tool in a completely general time-varying or nonlinear context and in the presence of noise. We can now design periodic excitation signals that make it possible to detect and quantify the time-variations in instantaneous frequency response function measurements from a single experiment. However, several applications also require quantifying the level of the nonlinear distortions in such measurements. One needs to go beyond the best linear approximation and incorporate nonparametric techniques to deliver a high-quality estimate of the input-output frequency response function. Other approaches use polynomial optimization problems and symbolic calculations to address this problem, but much work is still needed to obtain efficient numerical implementations. For nonlinear systems, a major drawback is the explosion of the number of parameters as the model complexity grows, which is why one often considers only simple structures (Wiener, Hammerstein, nonlinear ARX).

The design of systems for fault detection and isolation (FDI) is well developed for linear systems and certain classes of nonlinear systems. However, much less work has been done on model-based FDI for distributed parameter systems, and decentralized FDI is only starting to be addressed. One also needs systematic and efficient approaches for tuning the design parameters of FDI systems so as to achieve a given performance in terms of statistical properties, such as false and missed alarm probabilities, in the presence of modeling uncertainties. In addition, Moving Horizon Estimation promises to become a powerful tool for the online identification of parameterized system models as well as for fault detection and diagnosis, but it requires the embedded solution of convex or even non-convex optimization problems, and the online algorithms need to be reliable and fast.

Taking into account the manifold or tensor structure of data has become a common theme in machine learning and many other domains in recent years. In kernel-based modeling for unsupervised learning, tuning parameter selection is a key aspect because of the lack of target values and of clear insight into the underlying process. The parameters are typically selected using heuristics, while prior knowledge is often ignored, leading to suboptimal performance. Techniques based on predictive and prognostic models and on Bayesian networks have been developed, but the complexity and lack of interpretability of the models is still a large barrier to their use in many domains.

In experiment design for system identification, a change of paradigm has taken place recently. Instead of minimizing the uncertainty under constraints on the excitation signals, attention has turned to minimizing the cost of the identification in order to achieve a prescribed uncertainty or a prescribed level of performance. In goal-oriented identification, the quantity of interest is not the uncertainty of the model but the performance degradation of the application that results from that uncertainty. In the special case of small data sets, classical statistics can no longer be used, and reconstructing reliable uncertainty bounds with a given confidence level is a real challenge. Other challenging areas are the modeling of diffusion systems, fractional systems, and multi-body systems.

Challenging engineering applications where the state of the art is moving fast include routing protocols in complex networks and the diagnosis of biomedical data. One example is functional Magnetic Resonance Imaging, currently the diagnostic method of choice to study and visualize the activity of the human brain; in vivo Magnetic Resonance Spectroscopy is a more specialized technique known to bring additional, essential metabolic information.

Bioprocesses consist of in vitro micro-organism cultures performed in bioreactors. For pharmaceutical applications, the micro-organisms are often genetically modified to express products of interest. Due to their complexity and nonlinear behavior, the mathematical modeling of bioprocesses is a challenging task, which involves various techniques from statistical analysis and system identification in order to perform process optimization and quality control.

Macroscopic models of bioprocesses are simplified models that only consider the main macroscopic components of interest. The systematic derivation of such models from limited sets of experimental data is, however, still an open problem. There is also an increasing need for modeling specific parts of biological processes, e.g. the induction phase that pilots the production of some recombinant proteins by genetically modified cells and plays a crucial role in product quality and process yields. At the mesoscopic and microscopic levels, one can use metabolic pathway analysis and elementary flux modes for the design of optimal fermentation operation strategies promoting fluxes towards maximum product formation, as well as for the design of consistent macroscopic bio-reaction schemes. Mathematical models of biological phenomena are increasingly available at the microscopic level; metabolic flux analysis is one of the best-known approaches. Nowadays, dynamic data are becoming available even at the level of the gene. Determining the appropriate connectivity of these networks and the appropriate dynamical models for gene expression are still important challenges.

Research objectives and proposed topics
WP2.1 Identification of linear systems (UCL, VUB, KUL, UGent)

Identification of time-varying systems. Following the lines of the best linear approximation of a time-invariant nonlinear system, we will first make an in-depth theoretical study of the properties of the best linear time-varying approximation of a class of time-varying nonlinear systems. Next, we will design periodic excitation signals such that we can simultaneously measure the instantaneous Frequency Response Function, its time-variation, the noise level, and the level of the nonlinear distortions. This gives very useful insight for the final parametric modeling step from noisy input-output data (time-varying differential/difference equations, state-space equations, and parallel block structures). The identification of Linear Parameter-Varying models, where the coefficients are unknown linear dynamic functions of the scheduling parameter(s), from noisy input-output data is still an open problem. Especially the model selection problem (dynamic order of the system and of each coefficient) is challenging.
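For illustration, the following minimal sketch generates a random-phase odd multisine, a standard excitation of this type; all design choices (record length, excited lines) are illustrative, not the project's actual design. Exciting only odd harmonics lets the non-excited lines reveal nonlinear distortions, while averaging over the repeated periods reveals the noise level.

```python
import numpy as np

N, P = 1024, 4                       # samples per period, number of periods
exc_lines = np.arange(1, 200, 2)     # excited odd harmonic numbers

rng = np.random.default_rng(0)
spec = np.zeros(N // 2 + 1, dtype=complex)
spec[exc_lines] = np.exp(1j * rng.uniform(0, 2 * np.pi, exc_lines.size))

u = np.fft.irfft(spec, n=N)          # one period of the excitation
u /= np.max(np.abs(u))               # normalize the peak amplitude
u_record = np.tile(u, P)             # apply P periods to the system
```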

Going beyond the Best Linear Approximation (BLA). The goal of this project is to introduce BLA-like models for nonlinear dynamic systems that are not dominantly linear and time-invariant. In essence, the BLA consists of an ideal part (the linear time-invariant part) and a perturbation (the noise source). The challenge for each type of system lies in determining the ideal dynamic behavior, extracting this model, and proving the properties of this approximation.

Nonparametric estimation of the Frequency Response Function (FRF). All methods for FRF estimation depend on some design parameters that need to be tuned, and this tuning essentially depends on bias-variance trade-offs. Our objective is to understand these bias-variance trade-offs and, on that basis, to propose possibly iterative methods in which the design parameters are based on estimates of the bias terms obtained in the previous iterations. More generally, we plan to examine the effect that the error on an estimated quantity produces on the bias and variance of a related quantity that is a function of the first one, as well as ways to produce bias-reduced estimates in such situations.
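A minimal sketch of the underlying idea, assuming a periodic excitation with P recorded periods: per-period FRF estimates are averaged to reduce variance, and their sample variance quantifies the noise level. Leakage and transient bias are ignored here; handling them is precisely where the design parameters and their bias-variance trade-offs enter.

```python
import numpy as np

def frf_periodic(u, y, N, P, lines):
    """FRF and its sample variance from records of length N*P at excited bins."""
    U = np.fft.rfft(u.reshape(P, N), axis=1)[:, lines]
    Y = np.fft.rfft(y.reshape(P, N), axis=1)[:, lines]
    G = Y / U                          # FRF estimate of each period
    G_hat = G.mean(axis=0)             # averaging reduces the variance
    var_G = G.var(axis=0, ddof=1) / P  # variance of the averaged estimate
    return G_hat, var_G
```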

From polynomial system solving to linear algebra and systems theory. We aim at translating the many results and symbolic algorithms from algebraic geometry into a linear algebra framework. Essentially, this approach linearizes the problem at hand by separating the coefficients into a tableau matrix and the monomials into a vector. The numerical tool set that we use consists of algorithms such as the QR, cosine-sine (CS), and singular value decompositions (SVD). From systems theory, we make use of realization theory and filtering techniques.
This research project aims at developing a numerical linear algebra based tool set to efficiently and robustly find all roots of a set of multivariate polynomials. We develop ideas on three complementary levels: 1) geometric linear algebra, which deals with column and row vector spaces, dimensions, orthogonality, nullspaces, eigenvalue problems, etc.; 2) numerical linear algebra, dealing with tools like Gram-Schmidt orthogonalization, ranks, angles between subspaces, etc.; 3) numerical algorithms, such as the SVD and QR, implementing the linear algebra tools in efficient and numerically robust methods. Here we also exploit matrix structure (e.g., Toeplitz structure and sparsity), investigate variations of iterative methods (e.g., power methods), and try to speed up convergence (e.g., FFT techniques). A multitude of relevant problems in applied mathematics motivates this research, such as prediction-error system identification, structured total least squares problems, Bayesian networks, algebraic statistics, tensor algebra, and many others.
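The univariate special case already illustrates the guiding principle that root-finding can be recast as an eigenvalue problem; the multivariate case generalizes this via Macaulay-type coefficient matrices and nullspace computations. A sketch with an illustrative polynomial:

```python
import numpy as np

# p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3); monic, descending powers
coeffs = np.array([1.0, -6.0, 11.0, -6.0])

n = len(coeffs) - 1
C = np.zeros((n, n))
C[1:, :-1] = np.eye(n - 1)       # shift (companion) structure
C[:, -1] = -coeffs[:0:-1]        # last column holds the coefficients

roots = np.linalg.eigvals(C)     # the roots are exactly the eigenvalues
```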

WP2.2 Identification and modeling of nonlinear systems (VUB, KUL)

Data-driven model structure detection for nonlinear systems. The goal of this project is to reduce the number of parameters by detecting the underlying structure of the system, starting from the unstructured state-space representation. Using well-designed optimization methods, we will impose sparsity on the model parameters so that eventually a more structured model with fewer parameters is retrieved. The parameter reduction will be made in a series of successive steps: 1) reduce the number of states that receive a nonlinear input; 2) reduce the number of states that contribute to the remaining nonlinear inputs; 3) retrieve uncoupled states to decouple the state equations. This result can be considered as a generalization of the classical block-structured methods.

Identification of parallel block-oriented models. In this project we study the identification of parallel block-oriented models. It is well known that these can be used as universal approximators of a wide class of nonlinear systems, e.g. the parallel Wiener (or parallel Wiener-Hammerstein) structure can uniformly approximate the class of Volterra systems. On the one hand, we will use a generalized and optimized version of the Schetzen-Wiener approach, using dynamic orthogonal basis functions based on the best linear approximation of the nonlinear system. On the other hand, we will develop a method that identifies parallel Wiener, Hammerstein, and Wiener-Hammerstein structures. In that case, we will develop an initialization method based on the best linear approximation measured under different operating conditions.

Modeling of Open Dynamical Systems. Two of the main features of engineering systems are (i) that they are dynamic, meaning that the time evolution of the variables plays a central role, and (ii) their interconnectivity. This interconnectivity implies that the subsystems must be open, that is, that the components are influenced by their environment. The aim of the research in this project is to develop a broad mathematical framework for obtaining models of interconnected systems. Central in this modeling methodology is the notion of the behavior of the system. This notion generalizes the classical input-output framework in a direction that is much better adapted to real physical systems. One class of systems of interest is mechanical systems, with particular emphasis on modeling the energy flow between subcomponents. Another class is chemical systems, where incorporating chemical reactions and potentials poses a system-theoretic challenge. Until now, this work has been concerned with deterministic systems. Of course, useful and flexible concepts require allowing for uncertainty in the models. Recent work has initiated a generalization of the behavioral approach to stochastic systems. The technical concept that makes this possible is the notion of complementary sigma-algebras, which touches on the very foundations of probability theory.

Nonlinear system identification using kernel-based models. Next to aiming at more general model structures, we will study different ways of including prior information in black-box models. One avenue is the integration of knowledge about a system in multiple operating points, as present in many industrial applications, into a single nonlinear model. Another possibility arises in the combination of a priori known model structures, given by algebraic and/or differential equations, with measured data. Sparsity-inducing norms, especially group and matrix norms, will be used at this point. Many initially nonconvex problems can be cast in such a way that all non-convexity is aggregated within a rank constraint on some variables. Then, powerful convex relaxations can be obtained by replacing the rank constraint by its best convex approximation, the nuclear norm. For applications in traffic networks, partially linear modeling will be studied by combining least squares support vector machines and sparse linear regression for estimating travel times.
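A toy sketch of the nuclear-norm idea (all values illustrative): the proximal operator of the nuclear norm soft-thresholds singular values, and alternating it with the data constraint recovers a low-rank matrix from partial observations.

```python
import numpy as np

def svt(X, tau):
    """Proximal operator of the nuclear norm: soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank 2
mask = rng.random(A.shape) < 0.5     # half of the entries are observed

X = np.zeros_like(A)
for _ in range(500):
    X = svt(X, 0.2)                  # pull towards low rank
    X[mask] = A[mask]                # re-impose the observed entries
```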

WP2.3 Fault detection and diagnosis (UMons, KUL)

Model-based fault diagnosis. We will address the development of design tools for distributed change detection/isolation systems, as well as the development of analysis tools that allow one to evaluate the effect of network imperfections (such as packet dropouts and delays) on the FDI performance. The development of a methodology for fault diagnosis in distributed parameter systems will be continued, in particular for the detection and isolation of faults in a moving-bed separation process (aging of the separation columns, pump failures, sensor drift, etc.).

Moving Horizon Estimation (MHE). Building upon previous work on real-time algorithms, we will address the development of fast MHE algorithms that exploit the structures arising from non-quadratic penalties (L1, Huber) and from large-scale coupled systems, exploiting parallel hardware, and we will use these algorithms for the online estimation and diagnosis of the biochemical and mechatronic applications within this proposal.
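A minimal MHE sketch, assuming a scalar linear model and an off-the-shelf solver; the Huber penalty on the output residuals is what makes the estimate robust to measurement outliers. All names and tuning values are illustrative, and a real-time implementation would replace the generic solver by a tailored, structure-exploiting one.

```python
import numpy as np
from scipy.optimize import minimize

def huber(r, delta=1.0):
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def mhe_cost(x, y, a=0.95, q=10.0):
    # x: state trajectory over the horizon; y: measurements in the window
    return huber(y - x).sum() + q * np.sum((x[1:] - a * x[:-1])**2)

def mhe_estimate(y_window):
    sol = minimize(mhe_cost, y_window, args=(y_window,), method="L-BFGS-B")
    return sol.x[-1]                 # state estimate at the end of the horizon
```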

WP2.4 Machine learning (KUL, ULg, UGent)

Tensor-based techniques for manifold learning. Whereas most existing tensor techniques are non-convex, our recent work has shown that convex methods can also be derived. The existing convex techniques work in connection with the Tucker decomposition and use matricization as their main tool. An important topic for future research is the extension of this approach, for instance by deriving proximity mappings specifically designed for tensors.
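A short sketch of the matricization tool on which these convex techniques rest: the sum of nuclear norms of the mode-n unfoldings (the overlapped Schatten-1 norm) is a convex surrogate for the Tucker multilinear rank. The tensor below is illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: the mode-n fibers become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

T = np.random.default_rng(2).standard_normal((4, 5, 6))
overlapped_nn = sum(np.linalg.norm(unfold(T, m), 'nuc') for m in range(T.ndim))
```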

Unsupervised and semi-supervised learning with kernel-based methods. New schemes for incorporating prior knowledge will be studied by adding constraints within kernel spectral clustering. For complex network applications, kernel spectral clustering will be studied with new formulations for analyzing and predicting communities that evolve over time.
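A minimal sketch of the spectral-clustering core (illustrative, without the constrained formulations studied here): the leading eigenvectors of a degree-normalized RBF kernel matrix provide an embedding on which clusters can be read off, e.g. by k-means on the rows. The bandwidth sigma is exactly the kind of tuning parameter whose selection this work package addresses.

```python
import numpy as np

def spectral_embedding(X, k, sigma=1.0):
    sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    K = np.exp(-sq / (2 * sigma**2))       # RBF kernel matrix
    M = K / K.sum(axis=1, keepdims=True)   # random-walk normalization
    w, V = np.linalg.eig(M)
    idx = np.argsort(-w.real)[:k]          # k leading eigenvectors
    return V[:, idx].real                  # rows: points in cluster space
```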

Large-scale and parallel machine learning. The objective of this project is to develop original machine learning algorithms for solving very large-scale problems by exploiting parallel computing architectures. The main drive of our research will be to develop these algorithms ex nihilo, directly with the large-scale and parallel objectives in mind, rather than deriving them as parallelizations or efficient implementations of existing serial algorithms. Our objectives also include the practical implementation of these algorithms and their application to massive biological datasets. We will target the design of novel supervised learning algorithms that handle very high-dimensional input and output spaces, combining recent advances in statistical learning, optimization, and information theory.

Interpretable models. We will study a novel approach that constrains the models towards interpretable scoring systems using suitable basis functions, iteratively reweighted L1 regularization, and the imposition of sparseness. A framework covering different model formulations will be developed for binary and multi-class classification, survival analysis, and extensions to high-dimensional data. This research will incorporate aspects of regularization, feature selection, model sparsity, kernel-based learning, and optimization.

Longitudinal data analysis: a kernel-based approach for classification. Nowadays, technological developments make it possible to monitor patients repeatedly over time. For example, MR(S)I data can be used for the follow-up of brain tumor patients. Recently, relations between traditional longitudinal data fitting procedures and kernel machine regression have been reported. A major challenge remains to develop kernel-based tools for the accurate classification of longitudinal profiles, since the longitudinal profiles often belong to different groups (e.g. responders versus non-responders in the context of brain tumors).

Credal networks and imprecise probability trees. When the uncertainty models are more general, one obtains imprecise probability models, leading to so-called credal networks, which allow for more robust modeling. Very little is known about their behavior and the resulting inferences. Efficient exact algorithms for inference and optimization in credal networks will be studied. For imprecise probability trees, basic concepts and techniques for efficiently dealing with such inferences will be investigated, allowing for robustified probabilistic inference in stochastic processes.

WP2.5 Experiment design and goal-oriented identification (UCL, KUL, VUB, UGent, ULg, UNamur)

Experiment design and goal-oriented identification. A major challenge is to translate constraints on the required accuracy of the model application into constraints on the class of admissible spectra of the excitation signals. A first open problem, therefore, is to parameterize this class of admissible signal spectra and to understand the relation between these parameterizations, the moments they generate, the corresponding information matrices, and the ensuing variance of the quantity of interest. Since the optimal experiment can only be computed from a prior estimate of the unknown model, another open problem is to study the sensitivity of the optimal solution with respect to errors on this prior estimate.
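A least-costly design sketch under strong simplifying assumptions (FIR model, spectrum discretized on a frequency grid, the cvxpy package available): the information matrix is linear in the spectrum samples, so minimizing the input power subject to an accuracy requirement is a semidefinite program. All dimensions and the target gamma are illustrative.

```python
import numpy as np
import cvxpy as cp

nb, K, gamma = 3, 32, 1.0             # FIR order, grid size, accuracy target
w = np.linspace(0, np.pi, K)

phi = cp.Variable(K, nonneg=True)     # input power spectrum on the grid
M = 0
for k in range(K):
    v = np.exp(-1j * w[k] * np.arange(nb))
    M = M + phi[k] * np.real(np.outer(v, v.conj()))  # FIM: linear in phi

prob = cp.Problem(cp.Minimize(cp.sum(phi)),          # least input power...
                  [M >> gamma * np.eye(nb)])         # ...given the accuracy
prob.solve()
```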

Identification for control. We study optimal solutions by observing that the set of admissible controller-external inputs can be parameterized, without any approximation, by a finite set of matrix-valued trigonometric moments, and by casting the optimal design as a semi-definite program that is linear in these moments.

Small data sets. Since the asymptotic theory is no longer applicable, we will explore a totally different approach. Instead of delivering point estimates of the model parameters and the corresponding covariance, we will estimate parameter sets (intervals) and quantify the confidence level of these sets. The latter can, for example, be obtained via contour lines of the maximum likelihood cost function. Constructing minimum-volume parameter sets with a high confidence level will be the main difficulty to be solved.
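A sketch of the contour-based construction, assuming a negative log-likelihood cost evaluated on a parameter grid; the chi-square threshold used below is the classical asymptotic calibration, and recalibrating it for small data sets is exactly the open problem.

```python
import numpy as np
from scipy.stats import chi2

def confidence_region(cost_grid, n_par, level=0.95):
    """Mark grid points inside the likelihood-contour confidence set."""
    thr = 0.5 * chi2.ppf(level, df=n_par)      # for -log-likelihood costs
    return cost_grid <= cost_grid.min() + thr
```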

Measurement-based meta-modeling framework. The extraction of simple, scalable models for complex structures is nowadays fed by PDE simulations of the system response. We will create a measurement-based meta-modeling framework, in which the meta-model structure can be selected so as to be statistically meaningful. For many systems, the measurement process is fast compared to the PDE simulation time. Obtaining a larger amount of data is therefore easier, but the data come with noise perturbations and the limitations of the measurement setup.

Reaction-diffusion equations: the role of geometry and granularity. Reaction-diffusion equations are universal tools used to model a broad range of complex phenomena in very different domains, such as economics, physics, and biology. Our work focuses on unravelling the mechanisms responsible for the selection and formation of spatio-temporal patterns in biologically inspired models. Particular emphasis is placed on understanding the role of the underlying geometry, the finite size effect due to the finite number of molecules as well as possible bifurcations in pattern development.

Identification of fractional systems and extension to systems including advection. The research objectives are twofold: First, novel non-parametric techniques will be developed to cope with arbitrary excitation, prior to modeling in the s-domain, and to deal with the long transient responses of fractional systems. Second, the extension towards diffusion-advection systems will be addressed. The latter has applications in thermal problems that contain not only diffusion phenomena, but also mass transport (e.g. geothermal heat diffusion in the presence of ground water transportation).
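A small sketch of why s-domain modeling is natural here: the frequency response of a fractional-order model follows directly from evaluating s^alpha on the imaginary axis, giving the characteristic non-integer slopes and phase plateaus. The model G(s) = 1/(s^0.5 + 1) is illustrative.

```python
import numpy as np

alpha = 0.5
omega = np.logspace(-2, 2, 200)
s = 1j * omega
G = 1.0 / (s**alpha + 1.0)            # fractional-order model

mag_db = 20 * np.log10(np.abs(G))     # ~ -20*alpha dB/decade at high freq.
phase_deg = np.angle(G, deg=True)     # levels off near -90*alpha degrees
```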

Model reduction for flexible multi-body simulation of mechanical and mechatronic systems. The goal of the proposed research program is to develop novel system-level as well as component-level model reduction techniques for systems with time-varying topology. The new methods will be accompanied by real-life engineering validations from the industrial machinery and transportation sectors, including the analysis of the drivetrain dynamics of modern multi-megawatt wind turbines, the analysis of the broadband contact dynamics of modern gear transmissions and bearings, the mechatronic design and analysis of modular production machinery, and the optimal mechatronic design of active vehicle safety measures in their interaction with body flexibility.

Modeling for efficient broadband structural dynamic and vibro-acoustic simulation. The starting point will be an innovative wave-based modeling method that we have developed recently: a non-element-based numerical analysis concept that has proven to converge much faster than conventional element-based methods.

Identification of the influence of geometry variations on the capacity of xDSL lines. Nowadays, the copper telephone network is pushed to its limits to offer continuously increasing data rates and new services. The latest xDSL developments aim at using both twisted pairs to offer the customer a higher throughput, possibly by exploiting the common-mode signal as well, and at coordinating the twisted pairs of different customers to reduce the crosstalk between the different services within a binder. This research project will investigate the influence of geometry variations on multiconductor transmission line models, with special focus on xDSL performance.

Modeling of animal locomotion using Markov chain models. Locomotor activity of laboratory rodents is the most visible part of their behavior. It can be easily and profoundly modified by many factors such as drugs or a variety of learning tasks. Modifications of the behavior via drugs or due to training have an impact on the way the animals move and on their neuronal activity. These can be modeled using Markov chain models where each state describes a particular type of movement or of neuronal firing pattern. The modeling will allow for a quantitative analysis of animal locomotion, and is a step towards a deeper understanding of goal-directed behavior, learning, and spatial navigation.
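A minimal sketch of the estimation step, assuming the movements have already been discretized into labeled states: the maximum-likelihood transition matrix is obtained by counting and normalizing observed transitions. The state coding below is illustrative.

```python
import numpy as np

def fit_markov(states, n_states):
    """Maximum-likelihood transition matrix from a discrete state sequence."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        C[a, b] += 1.0
    return C / np.maximum(C.sum(axis=1, keepdims=True), 1.0)

seq = [0, 0, 1, 1, 2, 1, 0, 0, 1, 2]   # e.g. 0 rest, 1 walk, 2 rear
P = fit_markov(seq, 3)                 # row i: next-state probabilities
```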

Modeling under Rician noise assumptions and robust automated MRS data processing. The aim of this project is to model the brain dynamics present in fMRI signals under Rician noise assumptions, in order to develop a more accurate fMRI analysis. We propose to develop robust and automated MRS data processing for 2D/3D multi-voxel MRS measurements, based on semi-supervised quality assessment and spatial prior knowledge. Additionally, supervised and unsupervised tissue classification for high resolution multi-slice MR images will be tackled by data fusion of several MR imaging modalities acquired during the same measurement session, including MRS.

Neurodynamics models. A novel reduced model of bursting neurons will be developed and analyzed with the aim of unfolding the firing mechanisms of particular types of neurons (e.g. dopaminergic and serotonergic) and the role of particular small-conductance ionic channels in symptoms associated with Parkinson’s disease.

WP2.6 Monitoring of (bio)chemical reactors (UMons, UCL, KUL)

Modeling of bioprocesses. Problems that will be addressed are the design of informative experiments, the decomposition of the identification problem into well-posed subproblems, and the validation of interpretable models. At the microscopic level, soundly exploiting the detailed available information for dynamic modeling, optimization, and control is a challenge.
Process monitoring and control is essential for product quality and process productivity, and is becoming an integral part of the recommendations of the Food and Drug Administration. However, monitoring and control are made difficult by the lack of reliable dynamic process models and by the scarcity of on-line measurements of metabolite concentrations. These variables can be reconstructed on-line on the basis of coarse-scale mechanistic models of the (bio)chemical process and some available and, ideally, cheap hardware measurements. Robust nonlinear estimation techniques, such as interval observers, sliding mode observers and particle filtering, are the methods of choice for tackling these problems. Based on macroscopic dynamic models, robust state estimation techniques will be studied to reconstruct on-line non-measured state variables and, in some cases, unknown inputs affecting the system. A practical goal is to estimate key state variables for which commercial probes do not exist, are too expensive, or are out of measurement range. Particular applications will be considered in this respect, including (a) fed-batch cultures of bacteria and the difficult determination of substrate (glucose) and byproduct (acetate) concentrations; (b) continuous cultures of micro-algae and the determination of intracellular concentrations (cell quotas); and (c) cultures of animal cells in perfusion and the determination of key metabolic components.

Particular attention will be paid to nonlinear state estimation techniques, including interval observers and stochastic approaches such as particle filtering, unscented Kalman filtering and ensemble Kalman filtering. Finite-time state estimation techniques and unknown input estimation will also be considered. In connection with distributed parameter systems, we focus on state estimation and control of Simulated Moving Bed (SMB) chromatographic separation processes, which are increasingly used in the pharmaceutical sector. This work involves the selection of appropriate model structures, experiment design, parameter estimation (with identifiability analysis), nonlinear state observer design (based for instance on the wave theory), and fault detection (aging of the separation columns, pump failures, temperature perturbation).
New efforts will be dedicated to the development of new estimation algorithms for reaction systems, with a particular emphasis on the development and analysis of finite-time converging schemes. In this context, new developments are expected with respect to sliding mode and super-twisting observers.
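A bootstrap particle filter sketch for a toy fed-batch culture with Monod kinetics (model structure, parameter values and noise levels are all illustrative): biomass is reconstructed from noisy substrate measurements alone, which is the software-sensor setting described above.

```python
import numpy as np

rng = np.random.default_rng(3)
mu_max, Ks, Y, dt = 0.4, 0.1, 0.5, 0.1    # Monod parameters, time step (toy)

def predict(p, q=0.01):
    X, S = p[:, 0], p[:, 1]
    mu = mu_max * S / (Ks + S)             # Monod specific growth rate
    Xn = X + dt * mu * X
    Sn = np.maximum(S - dt * mu * X / Y, 0.0)
    return np.column_stack([Xn, Sn]) + q * rng.standard_normal(p.shape)

def update(p, y_S, r=0.05):
    w = np.exp(-0.5 * ((p[:, 1] - y_S) / r)**2)       # substrate likelihood
    idx = rng.choice(len(p), len(p), p=w / w.sum())
    return p[idx]                                     # resampled particles

particles = np.column_stack([rng.uniform(0.1, 2.0, 500),   # biomass X
                             rng.uniform(0.0, 5.0, 500)])  # substrate S
# per new measurement y_S:  particles = update(predict(particles), y_S)
# biomass estimate:         X_hat = particles[:, 0].mean()
```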

Bioprocess monitoring. The aim is to propose robust state estimation algorithms capable of dealing with model uncertainties, especially those concerning the kinetic part of the model. The combination of hybrid observers and robust quasi-LPV approaches will be studied. Software sensors will be designed based on reliable and easily available measured signals. The aim is to estimate different kinds of output signals (concentrations of biomass and of metabolites like ethanol or acetate, etc.) on the basis of physical measurements (e.g. exhaust gas analysis) and/or some input signals used for control (base feeding, stirring rate, etc.). Both black-box and first-principles models should be tested to establish the links between these available signals and the ones to be estimated. Digital holographic microscopy provides not only cell counts but also quantitative information about cell morphology. Based on mathematical models, this information will be linked to the cell physiology, which could be very helpful for monitoring the state of the cells on-line.

WP2.7 Modeling and identification of biochemical processes at macroscopic level (UMons, KUL, UCL, ULg)

Global sensitivity analysis and optimal experiment design. The influence of the measurement errors on the model structure determination will be studied, as well as the use of statistical hypothesis tests for the selection of appropriate kinetic laws among nested structures. Global sensitivity analysis allows computing the first-order effects of the parameters on the measured outputs as well as the interactions between them. This technique is increasingly applied to the analysis of larger biological models and the practical assessment of parameter identifiability. Our aim is to exploit these techniques for optimal experiment design. As a specific target application, we will focus on the modeling of cultures of micro-algae and cyanobacteria in the chemostat, which have many potentially useful purposes such as water treatment and the production of food, cosmetics, pigments and biofuels. The modeling challenge in this area is to establish the influence of factors such as the time evolution of light intensity and nutrient concentrations, and to describe the production of valuable components.
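A compact sketch of a variance-based (Sobol) first-order index via pick-freeze Monte Carlo sampling, applied to a toy Monod growth-rate model; the factor ranges and the model are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(theta):                      # toy: growth rate at fixed substrate
    mu_max, Ks = theta[:, 0], theta[:, 1]
    return mu_max * 0.5 / (Ks + 0.5)

n = 20000
lo, hi = np.array([0.2, 0.05]), np.array([0.6, 0.3])
A = rng.uniform(lo, hi, size=(n, 2))
B = rng.uniform(lo, hi, size=(n, 2))
yA = model(A)

S1 = []
for i in range(2):                     # first-order index of each factor
    C = B.copy()
    C[:, i] = A[:, i]                  # "freeze" factor i from sample A
    S1.append(np.cov(yA, model(C))[0, 1] / yA.var())
```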

Methods for dealing with sparse data. Although mathematical models are well known for describing biomass growth in yeast cultures such as S. cerevisiae, they almost never take into account the most important properties required in the food industry: activity and stability. Modeling the time evolution of variables representative of these phenomena and the impact of certain stresses is therefore of primary importance for the optimization and control of this kind of process.
Mathematical models reproducing the time evolution not only of the biomass growth but also of the protein production, as a function of the substrate and inducer feedings, are required to optimize the productivity of recombinant protein production with microorganism cultures.
When developing macroscopic models for animal cell cultures, it remains more difficult to accurately reproduce this type of biological culture than microorganism cultures (bacteria, yeasts). Some phenomena, such as possible overflow metabolism, should be taken into account.
We also need to develop systematic macroscopic modeling methodologies capable of dealing with sparse data. The aim is to solve the problems of reaction scheme determination and of stoichiometry and kinetics identification with highly flexible model structures that are sufficiently complex to reproduce the behavior of different types of biological cultures, yet sufficiently simple to allow accurate parameter estimation despite the data sparsity.

Applications to plant growth, wine fermentation, microbial ecology. Research activities will concentrate on model development and analysis in the following fields of application: plant growth, wine fermentation, and microbial ecology. For plant growth, the objective is to model the complex plant behavior while accounting for the limited available data in a way that yields reliable models. Wine fermentation serves as a benchmark for batch processes, in particular in the context of food processes, to develop models that emphasize the different metabolic states over the course of the fermentation, with the objective of better controlling the production of key flavor markers.

WP2.8 Modeling and identification of biochemical processes at mesoscopic and microscopic levels (UMons, UCL, KUL, UNamur)

Metabolic modeling. Our objective is to extend our previous results in this area in order to perform a detailed metabolic analysis of overflow mechanisms and diauxic growth in various yeast and bacterial fermentations, as well as of animal cell cultures.
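A toy sketch of the core metabolic flux analysis computation (all numbers illustrative): under the pseudo-steady-state assumption the internal metabolite balances are linear in the fluxes, so unmeasured fluxes follow from the measured exchange rates by (least-squares) linear algebra; whether they are uniquely determined is the identifiability question.

```python
import numpy as np

# Balances of internal metabolites: N_u v_u + N_m v_m = 0 at steady state
N_u = np.array([[1.0, -1.0,  0.0],     # stoichiometry of unmeasured fluxes
                [0.0,  1.0, -1.0]])
N_m = np.array([[-1.0, 0.0],           # stoichiometry of measured fluxes
                [ 0.0, 0.5]])
v_m = np.array([2.0, 1.0])             # measured uptake/excretion rates

v_u, *_ = np.linalg.lstsq(N_u, -N_m @ v_m, rcond=None)
```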

Modeling of gene expression networks. Most microscopic modeling in the framework of bioprocess optimization has been focused on metabolic pathway analyses. However, the primary internal regulator of a cell's behavior is its gene expression network, which controls the expression levels of the various proteins, defines the appropriate response to be given to any external stimulus, and is intimately intertwined with the metabolic network. Our objective is to model the evolution of gene expression networks under changes in environment, external perturbations or along different development stages on the basis of all the available data (obtained e.g. from DNA microarray time series or high-throughput RNA sequencing).

Modeling of gene switching. Gene switching occurs naturally in living systems as a response to external perturbations, changes in environment or the development of the host organism. It can also be engineered and used in view of controlling the expression of specific genes for biotechnological or medical purposes. Deterministic models of gene switching are generally sufficient for describing the behavior of average concentration levels of mRNA and protein. However, they are incapable of taking into account effects caused by the internal fluctuations of particle numbers (internal noise). Different types of switches, such as positively auto-regulated genetic switches, will be analyzed using stochastic modeling.
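A Gillespie (stochastic simulation algorithm) sketch for a toy self-activating switch, where protein production has a basal term plus a Hill-type positive feedback; all rates and the feedback form are illustrative. Unlike the deterministic rate equations, individual runs show noise-induced jumps between the low and high expression states.

```python
import numpy as np

rng = np.random.default_rng(5)
k0, k1, Khill, nh, gam = 2.0, 18.0, 30.0, 4.0, 0.1   # toy rate constants

def propensities(P):
    prod = k0 + k1 * P**nh / (Khill**nh + P**nh)     # basal + feedback
    return np.array([prod, gam * P])                 # production, decay

P, t, T, traj = 0, 0.0, 500.0, []
while t < T:
    a = propensities(P)
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)                   # time to the next event
    P += 1 if rng.random() < a[0] / a0 else -1       # which event fires
    traj.append((t, P))
```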

Biomolecular modeling. The complex interactions that take place between the biomolecules in any living cell are primarily determined by the properties of the biomolecules themselves. Therefore, to modify these interactions in a controlled way, one needs to modify these biomolecules. With this in mind, we previously developed a knowledge-based approach to predict mutations with rationally modified thermodynamic stability. We will extend this approach to also predict mutations with modified thermal stability and solubility, with a view to obtaining a general tool for rational protein design that guides the selection of mutations likely to have the desired properties.

Multiscale approach. The aim is to build bridges between the molecular, gene, metabolic, cellular, and macroscopic levels in order to obtain robust and reliable models that could be used for (bio)chemical process optimization. One prototypical application is the simulation of chemical reactions on catalytic surfaces with slow diffusion. In that setting, macroscopic mean-field equations are not valid, since reactions can only occur when the right combination of reactants is present on a few neighbouring lattice sites. Hence, additional variables, such as pair correlations, need to be taken into account, and efficient, adaptive computational multiscale algorithms (see WP1.5) would have a significant impact.

Modeling of cell evolution. In the biomathematics framework, we are interested in modeling selected features of simplified forms of cells and in understanding the key ingredients of their evolution under the pressure of the environment. We develop expertise ranging from deterministic models to stochastic ones, the latter being particularly suitable when the numbers of chemicals involved are small and the finite size of the system can therefore play an important role. Using suitable techniques, notably the Van Kampen expansion, we are able to bridge these two limits and thus to study the fluctuations in the systems analytically.
