Current Students

The following are postgraduate students currently under my supervision or co-supervision.

Project Students

Computer Science Honours

Student Project Title Abstract

Engineering Skripsies

Student Project Title Abstract
Donovan Edeling
Swarm intelligence techniques that mimic the foraging behaviour of a swarm of birds have been successful on various static optimisation problems. Many real-world optimisation problems, however, are dynamic, and optimisation methods need to be capable of continuously adapting the solution in these changing environments. The main aim of this project is to develop and evaluate a new approach to approximate polynomials in dynamic environments, using set-based particle swarm optimisation (SBPSO). Polynomial approximation has been successful in static environments, where fixed data sets are used. The project will modify current techniques to address the problems experienced when static approximation is used in dynamic environments. The use of quantum particles will be investigated to ensure that diversity is maintained during the course of an algorithm run. The performance of the algorithm will be compared to currently available approaches.
Charl Herbst
Neural Networks with Adaptive Activation Functions Adaptive activation functions are activation functions that have been modified by introducing a trainable parameter into the function. Training a neural network with this additional parameter results in a dynamic optimization problem, because the parameter dynamically changes the topology of the loss function. By adjusting the parameter of the adaptive activation function, the gradient and orientation of the learned decision boundary can more accurately approximate those of the true decision boundary. This should reduce misclassifications and improve the generalization capability of the neural network. This project will develop a neural network with adaptive activation functions (NNAAF) using a standard particle swarm optimization (PSO) algorithm. A fitness landscape analysis will be conducted to investigate how the parameter(s) influence the characteristics of the neural network error landscape. The training statistics of the NNAAF model will be compared to those of a model trained with static activation functions. Due to the dynamic formulation of the problem, a PSO algorithm developed for dynamic environments, such as the quantum PSO (QPSO) algorithm, will also be implemented. The performance of the QPSO algorithm will then be tested in the presence of concept drift.
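As an illustration of the idea, a minimal sketch of an adaptive activation function is a logistic function with a trainable slope parameter (here called `lam`, a hypothetical name; the project itself would train such parameters alongside the weights using PSO):

```python
import numpy as np

def adaptive_sigmoid(x, lam):
    """Sigmoid with a trainable slope parameter lam.

    For lam = 1 this reduces to the standard logistic function;
    changing lam changes the steepness of the activation, and
    hence the shape of the loss surface being optimized.
    """
    return 1.0 / (1.0 + np.exp(-lam * x))

# A larger slope parameter sharpens the transition around zero.
x = np.linspace(-4, 4, 9)
y_standard = adaptive_sigmoid(x, lam=1.0)
y_steep = adaptive_sigmoid(x, lam=5.0)
```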
Reynard Marx
Prediction and Analysis of Exoplanet Features Using Self-Organizing Maps The aim of this project is to train a self-organizing map (SOM) on exoplanet data. The SOM will then be used to identify clusters of similar exoplanets, predict values for features in the dataset, and impute missing values. Exoplanet datasets tend to be sparsely populated due to limitations in measurement techniques. SOMs tend to perform well even on sparsely populated datasets, hence their use in this research. Exoplanet research is currently a very exciting area, mainly because of the potential of finding habitable worlds. Using a SOM to cluster exoplanets will allow us to identify which exoplanets are most like Earth, which can be one indication that these exoplanets should be investigated further for signs of habitability.
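The core of a SOM fits in a few lines of NumPy. The toy below is an illustrative sketch only; the grid size, decay schedules and parameter values are arbitrary choices, not the project's design:

```python
import numpy as np

def train_som(data, grid=(5, 5), iters=1000, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small self-organizing map on `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))          # codebook vectors
    # Grid coordinates of each unit, used by the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1).astype(float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit: the unit whose weight vector is closest to x.
        d = np.linalg.norm(w - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        lr = lr0 * np.exp(-t / iters)                    # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)              # shrinking neighbourhood
        h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=-1) / (2 * sigma ** 2))
        w += lr * h[..., None] * (x - w)                 # pull units towards x
    return w
```

After training, each data point can be assigned to the cluster of its best-matching unit, which is how similar exoplanets would be grouped.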
Ross Nayler
Incremental Feature Learning The aim of this project is to develop a dynamic particle swarm optimization (PSO) algorithm for incremental feature learning (IFL) that can be used to train a neural network (NN). IFL refers to problems where the number of descriptive features increases over time; it can also refer to incrementally adding features from a current set of available features. As new features are added to the model, the dimensionality of the search landscape changes, and IFL is therefore essentially a dynamic optimization problem (DOP). Feature importance will be ranked using the Boruta feature selection (BFS) algorithm. Many current machine learning approaches come with a substantial computational cost, and the goal of this project is to produce an accurate IFL model with a lower computational cost. The performance of the model will be compared to that of a model trained on all available features.
Robert Power
Dynamic Radial Basis Function Neural Networks A radial basis function neural network (RBFNN) is a neural network (NN) in which the activation functions of the hidden units are radial basis functions. Such activation functions give the NN the ability to generate multivariate non-linear mappings. The purpose of this project is to address a common issue in the construction of RBFNNs: selecting the optimal number of hidden units. An investigation will be conducted into whether a particle swarm optimization (PSO) algorithm can be used to dynamically adjust the number of hidden units, the means and standard deviations that define the kernels of these hidden units, and the hidden-to-output weights. If this investigation confirms the notion, the resulting NN can be classified as a dynamic RBFNN. An empirical comparison will then be conducted between this dynamic RBFNN and other concept-drift-robust training approaches and algorithms.
Weka Steyn
Improved Multi-Guide Particle Swarm Optimization for Many-Objective Optimization This project aims to improve the scalability of the multi-guide particle swarm optimization (MGPSO) algorithm by incorporating ideas from other many-objective optimization algorithms, such as KnEA and NSGA-II, without drastically increasing the computational complexity of the algorithm. Different archive update strategies are explored, and secondary convergence mechanisms, such as knee points, are potentially implemented. The use of sub-objective dominance as an additional criterion for solution dominance is also considered, due to the high number of non-dominated solutions present as the number of objectives increases. The results are compared to those of the standard MGPSO algorithm and other state-of-the-art many-objective optimization algorithms.

Masters Students

Computer Science

Student Thesis Title Abstract
Chelsea Barraball
Competitive Coevolutionary Particle Swarm Optimization for Dynamic Optimization Problems This research will develop a competitive coevolutionary particle swarm optimization approach to solve dynamic optimization problems. Competitive coevolution models the arms race observed between populations of species that compete for survival, for example predator-prey relationships. Due to the dynamic nature of this process in nature, it is believed that algorithmic models of such predator-prey behaviors will lend themselves naturally to solving dynamic optimization problems. The research will start by developing a competitive coevolutionary particle swarm algorithm for solving static optimization problems and investigating its impact on swarm diversity. Different approaches to the computation of the relative fitness function and the selection of the competition pool will be evaluated. The approach will then be applied and evaluated on various types of dynamic optimization problems.
Heinrich Cilliers
Adaptive Gaussian Mixture Models A Gaussian mixture model (GMM) is used in unsupervised learning to represent clusters in a dataset as a mixture of Gaussian distributions. GMMs are usually fitted using the Expectation-Maximization (EM) algorithm, which is prone to yielding sub-optimal solutions. Additionally, the EM algorithm fits GMMs to stationary data and requires the number of clusters to be specified beforehand. This study aims to propose, evaluate and compare various approaches to fitting a GMM to stationary and non-stationary data, as well as dynamically determining the optimal number of Gaussians using particle swarm optimization.
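For reference, the EM baseline that this study aims to improve on can be sketched for the one-dimensional case. This is a simplified illustration, with arbitrary initialization and a fixed iteration count rather than a convergence test:

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Fit a 1-D Gaussian mixture with the EM algorithm.

    This is the classical baseline: k is fixed in advance and the data
    is assumed stationary -- the two limitations the study addresses.
    """
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread initial means over the data
    var = np.full(k, np.var(x))
    pi = np.full(k, 1.0 / k)                        # mixing weights
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi
```

Because EM performs local hill-climbing from its initialization, it can converge to sub-optimal solutions; a population-based optimizer such as PSO searches more globally.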
Kyle Erwin
Alan Gray
Set-based Particle Swarm Optimization for Portfolio Optimization Portfolio optimization is a complex real-world problem where assets are selected such that profits are maximized while the risk is simultaneously minimized. Traditional portfolio optimization approaches make use of quadratic programming to determine portfolios that represent a balance between return and risk. However, as the number of assets increases, the efficiency of quadratic programming deteriorates. In recent years, nature-inspired algorithms have become a popular choice for efficiently identifying optimal portfolios. This research develops such an algorithm that, unlike previous algorithms, uses a set-based approach to reduce the dimensionality of the problem as well as determine the appropriate budget allocation for each asset. The set-based particle swarm optimization algorithm is extended to solve multi-objective and constrained formulations of the portfolio optimization problem using set-based representations.
Jordan Daubinet
Multi-Agent Reinforcement Learning for Financial Trading Financial trading is an activity undertaken by a "financial trader", in which the trader buys and sells financial assets on a trading venue with the goal of making a profit from the exchange of securities. Reinforcement learning is a machine learning approach in which an agent is trained to learn the optimal actions to take for a specific environment state. The agent learns from experience, using positive and negative reinforcement based on the outcomes of its actions. The recent performance improvements in modern reinforcement learning algorithms have brought about new opportunities for implementing these algorithms within the financial trading space. This research will implement a multi-agent reinforcement learning algorithm to act as an artificial financial trader with the objective of making a profit over a set time frame. Each individual agent will be trained on a unique data type, selected from technical ticker information, stock fundamental information, sentiment values on social media, or some other form of alternative data. Each trained agent's action space will be used as input into a final reinforcement learning layer that decides whether or not to make a trade.
Ignazio Ferreira Neural Network Ensembles and Concept Drift This research develops an approach to train a neural network ensemble in the presence of concept drift. Particle swarm optimization algorithms developed for solving dynamic optimization problems will be used to train each member of the ensemble and to adapt learned decision boundaries as concept drift is experienced. A multi-modal particle swarm optimization algorithm will be developed to ensure that ensemble members are situated on different local minima of the neural network landscape. Different mechanisms to ensure diversity in ensemble member decision making will also be investigated.
Ryan Lang Landscape-aware Hyper-heuristics A hyper-heuristic employs a heuristic pool consisting of a wide variety of different heuristics. A heuristic selection operator is then used to guide the search towards the optimal heuristic(s) to use. Fitness landscape analysis is a formal approach to characterize search landscapes. The purpose of this research is to find a mapping between algorithm performance and the characteristics of optimization problems, in order to determine for which problem characteristics certain algorithms perform well or poorly. From this, the first landscape-aware hyper-heuristic selection rules will be developed.
Muhammed Rahman
Genetic Programming to Induce Classification Trees in Dynamic Environments Genetic programming has been used successfully to evolve classification trees for stationary data. This research will develop genetic programming approaches to evolve classification trees for non-stationary data, where concept drift occurs. In addition, approaches will be developed to include dynamically changing boundaries that are parallel and non-parallel to the axes. The set operator will also be included in the genetic programming language. As part of the study, approaches will be developed to quantify the diversity of the tree-based individuals found in genetic programming populations.
Benjamin Strelitz
A Dynamic Multi-Modal Particle Swarm Optimization Algorithm for Dynamically Constrained Optimization Problems Multi-modal optimization (MMO) PSO algorithms exist for static, unconstrained environments. Additionally, many PSO algorithms exist for solving statically constrained problems, but these return only single solutions. There are also PSO algorithms designed to track a single solution in unconstrained, dynamic environments. However, very few MMO PSO algorithms have been developed for tracking multiple solutions in unconstrained, dynamic environments, and there are currently no MMO PSO algorithms capable of tracking multiple solutions in dynamically constrained environments. The primary objective of this study is to develop an MMO PSO algorithm capable of solving dynamic optimization problems with dynamic constraints, with the ability to find all feasible solutions.
Aksel Thele
Mobile Telecommunications Limited, Namibia
Honey Bee Optimization for Dynamic Environments Many real-life problems can be formulated as dynamic optimization problems (DOPs). In a DOP the environment changes over time, presenting the challenge that optima have to be found and tracked as the environment changes. Efficient honey bee algorithms (HBAs) have been developed to find optima for static optimization problems. This thesis evaluates the performance of HBAs on DOPs. A number of modifications of HBAs are empirically evaluated on an extensive benchmark set of twenty-seven DOP classes. The thesis quantifies and compares the effectiveness of each modification strategy. Finally, recommendations are made on which modification strategies should be considered state-of-the-art and included in future studies.
JP van Zyl
Rule Extraction using Set-Based Particle Swarm Optimization Rule extraction using set covering approaches can be formulated as a set-based optimization problem. This study formally defines rule extraction as a set-based optimization problem, and then develops a set-based particle swarm optimization (SBPSO) algorithm to extract accurate and simple rules from classification data sets. The SBPSO for rule extraction is first applied to a single-objective formulation of the rule extraction problem, and then extended to a multi-objective formulation. The SBPSO will then also be adapted to extract rules for data streams where concept drift is experienced.


Student Thesis Title Abstract
James Faure
Image Classification and Recognition of X-rays Used to Label Teeth and Teeth Abnormalities in Dental Analysis Analysis of an X-ray can be time consuming for any dentist, and is subject to human error. This research will develop machine learning technologies to automate the analysis of dental X-rays in order to determine if there are any abnormalities in any of the teeth. The resulting model can be implemented on an app platform that can easily be accessed by orthodontic radiologists, and will be especially useful to those in rural areas where no dentists are available.
Faith Msibi
Enhancing Clinical Text Classification by Dealing with Lexical-Semantic Issues in Clinical Corpora Clinical notes are the backbone of a patient's care, written by medical practitioners and all other allied health professionals involved in that care. However, knowledge discovery from clinical data sets is very challenging and complex, because these narrative texts comprise health records that are very large, sparse and heterogeneous, use technical vocabulary, and contain noise and random errors. This research aims to contribute to the area of clinical text mining by using natural language processing (NLP) to develop a corpus pre-processing system that addresses noise, semantic and imbalanced-data issues in clinical texts, in order to enhance clinical text classification performance and further support clinical decisions. The proposed system passes collected clinical reports through a pre-processing step for lexical, syntactic, and semantic verification and medical concept extraction. This is followed by feature engineering, which outputs clean clinical text annotated with medical concepts. This narrative text will then be ready for use in training machine learning and deep learning models for clinical text classification.
Refiloe Shabe Reinforcement Learning for Portfolio Optimization In financial investing, the goal is to dynamically allocate a set of assets so as to maximize returns over time while simultaneously minimizing risk; this process is known as portfolio optimization. Portfolio optimization has been studied extensively by researchers in financial engineering, with novel work frequently published. These studies have explored various portfolio models with differing approaches, including constrained or unconstrained, and single- or multi-objective formulations. Investors have begun to turn to machine learning applications to analyze financial markets, because accurate stock market predictions can lead to lucrative results. Reinforcement learning (RL) is a type of approximate dynamic programming and has become one of the hotspots in modern machine learning. Research groups such as DeepMind and OpenAI achieved significant breakthroughs when they proved that RL can be useful for solving complex problems in game theory, control theory and robotics. The purpose of this study is to extrapolate the modeling capabilities of RL to financial engineering and to explore RL algorithms for portfolio optimization. Advances in sequential decision making through reinforcement learning have been instrumental in the development of multistage stochastic optimization, which is a key component of sequential portfolio optimization strategies. Different formulations of the problem will be considered: static and dynamic, with single-objective and multi-objective approaches. The analysis will provide support for the ability of reinforcement learning methods to act as universal trading agents, which are able to reduce computational and memory complexity, and which serve as generalizing strategies across assets, regardless of the trading universe on which they have been trained.
Werner van der Merwe
Model Tree Forests This research will develop a model tree ensemble for use on large data sets where the predicted target values are numerical. The performance of this model tree forest will be compared with that of a single induced model tree. Various aspects that influence the performance of the model tree forest will be investigated, including approaches to fuse the decisions of the individual model trees, to select a subset of features on which to construct model trees, and to subsample the data for each induced model tree.
Daniel von Eschwege
Self-Adaptive Meta-Heuristics using Cultural Algorithms Cultural algorithms (CAs) are evolutionary algorithms which maintain a belief space in parallel with a population space. The population space represents a set of candidate solutions to the optimization problem, while the belief space maintains a collection of beliefs, formed by the best individuals in the population, about where in the search landscape an optimum resides. Any population-based metaheuristic can be used in the population space to find an optimal solution to the relevant optimization problem.
Meta-heuristics have control parameters, and different control parameter configurations result in different levels of performance. Control parameter configurations are also very problem dependent, and usually require computationally expensive parameter tuning prior to solving the problem, which has to be repeated for each new problem. Conversely, self-adaptive algorithms adjust control parameter values during the optimization process. Considering particle swarm optimization (PSO) specifically, several self-adaptive, but inefficient, approaches have been developed.
This research will develop a CA approach that searches for the optimal PSO control parameter values used in the population space, by defining a belief space over the control parameter space. The belief space will indicate the parts of the control parameter space where the best performing individuals believe the best control parameter values can be found. Each individual will then sample values for its control parameters from this belief space. Different strategies to update and utilize the belief space will be developed to prevent premature convergence in both the belief and population spaces. The aim is a more efficient self-adaptive PSO algorithm, which will be extensively empirically analyzed.
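The belief-space idea can be illustrated with a toy normative belief space over the PSO control parameters w, c1 and c2. The class below is a hypothetical simplification (per-parameter intervals tightened to the range used by elite individuals), not one of the update strategies this research will develop:

```python
import random

class BeliefSpace:
    """Toy normative belief space over PSO control parameters (w, c1, c2).

    Keeps, per parameter, an interval spanned by the values used by the
    best-performing individuals; other individuals sample from it.
    """
    def __init__(self, bounds):
        self.bounds = dict(bounds)          # e.g. {"w": (0.0, 1.0), ...}

    def accept(self, elite_params):
        # Tighten each interval to the range used by the elite individuals.
        for name in self.bounds:
            values = [p[name] for p in elite_params]
            self.bounds[name] = (min(values), max(values))

    def sample(self):
        # An individual draws its control parameters from the beliefs.
        return {name: random.uniform(lo, hi)
                for name, (lo, hi) in self.bounds.items()}

random.seed(1)
bs = BeliefSpace({"w": (0.0, 1.0), "c1": (0.0, 2.5), "c2": (0.0, 2.5)})
elites = [{"w": 0.72, "c1": 1.4, "c2": 1.5}, {"w": 0.70, "c1": 1.5, "c2": 1.4}]
bs.accept(elites)
params = bs.sample()
```

In the actual research, the acceptance and influence functions would have to be designed carefully, since always tightening intervals like this would cause exactly the premature convergence the study aims to prevent.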

Data Science

Student Thesis Title Abstract
Shaun Joubert
Rule Extraction from Financial Time Series Traditional time series analysis focuses on modeling and forecasting. This project will move beyond the traditional focus, to the discovery of patterns or underlying relationships among the time series data, an approach generally referred to as rule extraction from time series data. Such rule extraction techniques can be very useful in aiding decision making. This project will explore the benefit of rule extraction techniques specifically on financial time series. A review of rule extraction approaches for time series will be conducted, whereafter a rule extraction approach for financial time series will be developed and evaluated.
Rossouw Landman
Comparing the Viability of Various Unsupervised Machine Learning Models in Identifying Financial Time Series Regimes and Regime Changes Financial stock data has been studied extensively over many years with the objective of generating the best possible return on an investment. It is known that financial markets move through periods where securities increase in value (bull markets) and periods where securities decrease in value (bear markets). Periods that exhibit similarities over different time frames are often referred to as regimes, which are not necessarily limited to bull and bear regimes, but include any sequences of data that experience correlated trends. Regime extraction and the detection of regime shifts in financial time series data can be of great value to an investor. Understanding when financial regimes will change, and towards what type of regime the financial market is tending, can help improve investment decisions and strengthen financial portfolios. This research focuses on reviewing and comparing the viability of different regime shift detection algorithms when applied to multivariate financial time series data. The selected algorithms are applied to different stocks from the Johannesburg Stock Exchange (JSE), and their performance is compared with respect to regime shift detection accuracy and the profitability of the detected regimes in selected investment strategies. This research is done in collaboration with NMRQL, which focuses on AI-driven investment management.
Ashail Maharaj
Review and Analysis of Big Data Clustering Approaches The aim of this project is to critically review and analyse clustering approaches and their suitability for use with big data. Big data can be defined using the four V's: Velocity, Variety, Volume and Veracity. Prior literature has shown that some methods outperform others with respect to given metrics. Selected methods, metrics and datasets are used in experiments to see how each model performs on different metrics across the datasets. The results will be summarised, and the approaches further evaluated to understand why certain approaches outperform others with respect to the given metrics and datasets. To do this, the chosen approaches will be broken down into their algorithmic components to understand where the gains, losses or trade-offs in performance originate. The project will conclude with a performance summary of the approaches, key reasons for each approach's performance with respect to the given metrics, and advice on when to use each approach in contrast to another.
Tristan Mckechnie
Feature Engineering Approaches for Financial Time Series Forecasting Using Machine Learning The purpose of this project is to investigate and explore various feature engineering approaches for machine-learning-based time series forecasting. An important step in forecasting time series data is preprocessing the data with the objectives of reducing noise, increasing the signal-to-noise ratio, removing trends, and reducing the feature space. The methods applied to achieve these objectives are known as feature engineering approaches. All work is carried out in the specific context of financial time series data.
This work begins with a literature review of relevant feature engineering approaches used for machine-learning-based financial time series forecasting. From this, a variety of feature engineering approaches are identified and empirically investigated on a case study data set. For the case study, multiple machine learning models are implemented to forecast a financial time series. The machine learning models are implemented using open-source libraries, and the specific models tested are driven by popular models found in the literature for similar data sets.
The purpose of investigating the different feature engineering methods is to determine when these methods are suitable and useful, and, if possible, to identify under which time series characteristics each method is applicable. For each method investigated, a high-level theoretical overview and an example of how to correctly use the method are given. All methods are finally tested on the case study data.
The case study focuses on forecasting stock market prices. As is common in the literature, data from different stock markets is used, because different markets exhibit different characteristics, some of which may or may not favour the feature engineering approaches and/or models implemented. For this case study, some challenges found in the literature are that price data contains high noise, the time series are typically non-stationary with trends causing auto-correlation, the data is typically highly non-linear, and the vast quantity of data available for the markets results in complex high-dimensional problems. The following feature engineering approaches have been identified and are investigated: noise reduction by means of filtering, using Fourier and wavelet transforms and Kalman filters; and dimensionality reduction, using auto-encoders, principal component analysis, independent component analysis and kernel principal component analysis.
Robert Mokakatlela Physioplus Course Recommendation Based on Content Affinity with Browsing Behaviour A recommender system (RS) filters and provides relevant content to the user based on the user's historic behaviour, formulated from interactions between the user and the items. Recommendations help to overcome the distressing search problem for the user, as Physioplus course subscribers have very specific educational needs. Hypothetically, an enhanced course recommender engine has the potential to increase Physioplus subscriber satisfaction and reduce cancellations. The current approach has some limitations: it uses keyword search and static course recommendations, and it uses elastic site search without considering historic user site visits.
The purpose of this dissertation is to build a better course recommender system: one that can take a user's recent Physiopedia browsing history and provide the user with a tailored, rank-ordered list of the courses that are most relevant to their entire content history. The recommender is built using collaborative filtering (CF), following both item-based and user-based approaches. Natural language processing (NLP) and neighbourhood similarity methods are used to complement collaborative filtering in achieving quality recommendations.
The recommendation system makes use of training and testing datasets from a real-world system to assess the overall performance of the proposed approach. Performance is then measured using standard metrics, namely precision, recall and the confusion matrix.
Aveer Nannoolal
Financial Time Series Modelling using Gramian Angular Summation Fields Gramian angular summation fields (GASFs) and Markov transition fields (MTFs) have been developed as approaches to encode time series into images, which allows techniques from computer vision to be used for time series classification and imputation. These techniques have been evaluated on a number of different time series problems. This project will apply GASFs and MTFs to financial time series. As a first step, financial time series will be encoded into images, and the dissimilarity among these time series images will be determined. The resulting images will then be clustered, and an analysis conducted to determine if time series grouped in the same cluster exhibit the same time series characteristics. The project will also develop an approach to predict trends based on these time series images.
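The GASF encoding itself is compact enough to sketch. This is a minimal illustration of the standard construction (rescale to [-1, 1], map values to angles, then form the pairwise cosine-sum matrix), not this project's pipeline:

```python
import numpy as np

def gasf(series):
    """Encode a time series as a Gramian angular summation field.

    The series (assumed non-constant) is rescaled to [-1, 1], mapped to
    angles phi = arccos(x), and the image is G[i, j] = cos(phi_i + phi_j).
    """
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1     # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))              # polar-coordinate angles
    return np.cos(phi[:, None] + phi[None, :])          # n x n image

img = gasf([1.0, 2.0, 3.0, 4.0])
```

The resulting n x n images are what would be compared, clustered, and fed to computer vision models.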
Judene Simonis
Forecasting financial markets with machine learning TBC.

Doctoral Students

Computer Science

Student Thesis Title Abstract
Adekoya Adekunle Multi-Objective Optimization For Dynamic Incremental Machine Learning Algorithms Due to data streams becoming more prevalent, research to improve the understanding, analysis and processing of big data streams is very active. The main goal of this research is to improve prediction and decision-making based on data streams. However, many data streams are generated and processed in environments characterized by uncertainty, such as temporal changes to the statistical properties of the data stream. A number of research studies are ongoing on how to handle this uncertainty.
As a result of the foregoing, this research aims to investigate the efficacy of evolutionary and swarm-based multi-objective optimization techniques for developing machine learning predictive models for data streams. An important consideration when developing these predictive models is the presence of concept drift, where the statistical distribution of the data and/or target variables may change over time. The consequences of concept drift include degradation in performance, and changes in the optimality of the resulting model architecture.
This research will formulate machine learning in the presence of concept drift as a dynamic multi-objective optimization problem, where the objectives are to optimize prediction accuracy and to optimize model architecture (in order to prevent overfitting and underfitting). Both objectives are dynamic, due to the consequences of concept drift.
Multi-objective machine learning predictive models for data streams will be developed and extensively evaluated. These predictive models will then be combined into a heterogeneous ensemble model, and the performance of this ensemble will be evaluated in comparison with the individual machine learning models.
Dave Bockus High Dimensional Fitness Landscape Analysis Fitness landscape analysis attempts to determine features of an error landscape defined by some function. Landscapes can be described as having plateaus, gentle or severe gradients toward local or global optima, or by the ridges and barriers of the landscape. In essence, high dimensional error surfaces are often likened to geological landscapes to give a visual reference to their features. The error surface thus affects how one traverses the landscape in search of an optimal point.
Search methods (e.g. particle swarm optimization, genetic algorithms and gradient descent, amongst others) which respond to the features of the landscape in order to move towards optima do so largely independently of a priori knowledge of the underlying error surface. Thus, any tuning of control parameters for the variety of algorithms traversing the error surface is done blindly, where dynamic alteration of those control parameters is independent of the local error surface features. Any dynamic tuning of control parameters to date is a result of the algorithm's behaviour with respect to the error surface, but not directly of characteristics of the error surface.
What is needed is a way of extracting local (possibly non-local) error surface features, ideally during the search process, which are fed back to the algorithms to tune control parameters in order to enhance their search capability. Two approaches can be followed: the first is to conduct tuning based on landscape characteristics prior to running the optimization algorithm, thereby using global landscape information. The second is a self-adaptive approach, where local landscape information is used to guide control parameter tuning in real-time during the optimization process.
Current methods of extracting error surface features encompass the use of random walks over the error surface in order to obtain a limited set of parameters which are used for tuning. Parameters which measure the neutrality or the slope of the surface have been used in attempts to link the error surface to the search algorithm. Unfortunately, the concept of random walks does not attach sufficient spatial context to sampled points on the error surface, so only the generalized metrics mentioned above can be extracted. Research has shown that those metrics have been used with limited success in tuning search algorithms.
What is needed is the extraction of useful features from the error surface which can expand and thus define one's image of what the error surface looks like. This leads to a more traditional view of an error surface, one which parallels that of natural geology and includes features such as hills, valleys, plateaus, etc. The one underlying issue is that the error surfaces in which PSOs, GAs and neural networks operate are high dimensional and do not lend themselves to visualization. Extracting features from high dimensional surfaces has proven difficult in terms of providing context between surface and algorithm. The objective of this study is to develop an approach where the search space is reduced to a smaller dimensional space, and the fitness landscape analysis is done in this smaller-dimensional space.
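As context for the random-walk metrics criticized above, a minimal sketch of one such generalized metric, the lag-k autocorrelation of fitness values sampled along a random walk, where values near 1 indicate a smooth landscape and values near 0 a rugged one (function names and the sphere test function are illustrative, not from the thesis):

```python
import random

def random_walk_fitness(f, start, steps, step_size):
    """Sample fitness values along a simple random walk through the search space."""
    x = list(start)
    values = [f(x)]
    for _ in range(steps):
        x = [xi + random.uniform(-step_size, step_size) for xi in x]
        values.append(f(x))
    return values

def autocorrelation(values, lag=1):
    """Lag-k autocorrelation of the fitness sequence along the walk."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    if var == 0:
        return 1.0  # perfectly neutral (flat) walk
    cov = sum((values[i] - mean) * (values[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

random.seed(0)
sphere = lambda x: sum(xi * xi for xi in x)
walk = random_walk_fitness(sphere, start=[1.0, 1.0], steps=200, step_size=0.05)
rho = autocorrelation(walk)
```

Note that the metric summarizes the whole walk into a single number, which is precisely the loss of spatial context the abstract argues against.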
Werner Mostert
Amazon Web Services
Insights into the feature selection problem through landscape analysis TBC
Taiwo Omomule Heterogeneous Mixtures of Experts As is the case with human experts, machine learning algorithms have a learned bias, which results in different machine learning experts, trained on the same dataset, producing different predictions. To address this problem, mixtures of experts have been developed. Mixtures of experts is an approach in machine learning to significantly improve the performance of predictive models by considering an aggregation of multiple machine learning algorithms, such as neural network ensembles, random forests, and k-nearest neighbour ensembles, amongst others. However, classical mixtures of experts are mostly homogeneous, in that all the experts in the mixture model are multiple instances of the same machine learning algorithm. While such an approach is still effective, the performance of mixtures of experts can be significantly improved if different types of machine learning algorithms are included, thus capitalizing on the strengths and inductive biases of a diverse set of experts and balancing the advantages of the different experts used in the mixture model. The rationale behind this approach to heterogeneous mixture of experts modeling is that no one machine learning algorithm performs best on all problems, and that different algorithms show different advantages and disadvantages based on the problem characteristics.
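The simplest heterogeneous aggregation is a plurality vote over the class labels produced by different experts. A minimal sketch (the three expert prediction vectors are hypothetical stand-ins for a neural network, a random forest, and a k-nearest neighbour model):

```python
from collections import Counter

def majority_vote(expert_predictions):
    """Combine per-sample class labels from heterogeneous experts by plurality vote."""
    combined = []
    for labels in zip(*expert_predictions):  # one tuple of labels per sample
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Hypothetical outputs from three different experts on four samples:
nn_preds  = [0, 1, 1, 0]
rf_preds  = [0, 1, 0, 0]
knn_preds = [1, 1, 1, 0]
ensemble = majority_vote([nn_preds, rf_preds, knn_preds])
```

Richer mixture-of-experts schemes replace the flat vote with a learned gating function that weights each expert per input; the sketch only shows the aggregation idea.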
Amani Saad
Differential Evolution and Optimal Population Sizes Parameter control is a significant topic in the design of evolutionary algorithms (EAs). The performance of EAs is greatly affected by the selection of control parameters; therefore, the optimal selection of control parameter values is a noteworthy research field. One control parameter common to all EAs is the population size. Differential evolution (DE) is sensitive to its control parameters, which are the crossover rate, the scale factor, and the population size. Although population size significantly influences the performance of DE, the limited volume of work dedicated to it indicates that this aspect is still under-investigated. A number of empirical studies have suggested that the population size should be related to the problem dimensionality. Based on these empirical studies, a general perception prevailed within the DE research community that advocates setting the size of a DE population to 10 times the dimension of the problem. However, the conclusions derived from these studies were based on very limited benchmark suites containing only a few benchmark functions, and hence do not hold for all problem instances. Also, the common method of gradually increasing the population size to achieve better performance is subjective: a clear incremental strategy was not defined; instead, rules of thumb were suggested as a user guide. The main objective of this research is to empirically analyze DE with respect to optimal population sizes, and to derive correlations between optimal population size and fitness landscape characteristics. The impact of different population sizes on search behavior will also be investigated.
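For reference, a minimal DE/rand/1/bin sketch with the population size exposed as the control parameter under study (the parameter values and sphere objective are illustrative defaults, not prescriptions from this research):

```python
import random

def differential_evolution(f, bounds, pop_size, max_iters=200, F=0.5, CR=0.9):
    """Basic DE/rand/1/bin minimizer; pop_size is the control parameter of interest."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(ind) for ind in pop]
    for _ in range(max_iters):
        for i in range(pop_size):
            # Three distinct individuals, all different from i (requires pop_size >= 4).
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)  # guarantees at least one mutated dimension
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (random.random() < CR or j == j_rand) else pop[i][j]
                     for j in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

random.seed(1)
sphere = lambda x: sum(xi * xi for xi in x)
# The rule of thumb questioned in the abstract: pop_size = 10 * dimension.
solution, value = differential_evolution(sphere, bounds=[(-5, 5)] * 2, pop_size=20)
```

Varying `pop_size` while holding `F` and `CR` fixed is the kind of controlled experiment the research would scale up across benchmark suites.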


Student Thesis Title Abstract
Olabanji Asekun
Dynamic Passenger Train Scheduling for South Africa using Particle Swarm Optimisation The train timetabling problem is a complex problem because, in most cases, there are multiple dynamic objectives and dynamic constraints that must be satisfied. Optimization methods are mostly used to address these problems because of their ability to find a feasible solution in a reasonable amount of time. This research aims to develop a particle swarm optimisation algorithm to solve the train timetabling problem in South Africa, to reduce delays caused by aging infrastructure and vandalism.
Emmanuel Buabin Noncommutative Time Series Feature Extraction with Banach Lie Algebra In this thesis, the focus is directed at the conceptualization, design and implementation of an algebraic evolutionary time series feature extractor. To be specific, a mathematical theory that constitutes 1) a specialized Banach/Hilbert space, 2) a specialized Banach Lie related algebra, and 3) a specialized body of mechanics (quantum motivated), is motivated for the overarching goal of algebraic time series feature data production, machine learning framework modelling, and other interactive concept modelling. The time series feature extractor, equipped with novel algebraic evolutionary (swarm) time series feature learning procedures, is used for feature extraction on produced (algebraic) time series datasets, within a specific time series problem context. To ascertain performance levels, experiments are varied across different parameters.
Kondwani Magamba
Crop management using predictive data analytics and leaf venation networks Malawi has an estimated population of 18.6 million as per 2019 reports, and it is expected that the population will double by 2038. This increase poses a threat not only to sustainable food production but also to food security, and may impact the country's drive to achieve one of the United Nations (UN) sustainable development goals, Goal 3. Agricultural production in Malawi is hindered by many factors, including crop pests and diseases, the inability to predict crop yield reliably, and a lack of information about meteorological conditions, soil properties and land cover.
There is therefore a need to overcome the identified challenges, as Malawi's economy is predominantly agriculture based: agriculture makes up about 30% of the country's Gross Domestic Product and employs over 64% of the national workforce.
The goal of this study is to use machine learning (ML) techniques to develop models for crop yield prediction, disease, crop quality, and crop species recognition. The study will achieve its objectives by using ML to study the leaf venation networks of Irish and sweet potatoes.
Noma Mkwananzi
Fitness Landscape Analysis of Neural Networks for Regression Problems The aim of training an artificial neural network (ANN) is to determine a set of weights that minimizes the error. An optimization algorithm is used to train the ANN, that is, to adjust the weights of the ANN. An understanding of ANN search landscapes may help to better inform the choice of optimization algorithm, and even of ANN architecture, for regression problems. Fitness landscape analysis is one approach that can be used. A study on fitness landscape analysis has been carried out to understand the characteristics of the search space of neural networks for classification problems. This research is an extension of that study: it will focus on regression problems and seeks to determine whether the landscape properties of regression problems differ from those of classification problems.
Timothy Carolus
Control Parameter Importance and Stability Analysis of Population-based Algorithms A common problem in the design of optimization algorithms is ensuring that the sequence of solutions converges. This problem becomes more difficult for population-based meta-heuristics. One such group of iterative algorithms is the swarm intelligence based algorithms, such as particle swarm optimization (PSO). Stability conditions have been derived on the control parameters of a class of such population-based algorithms, where the position updates can be reformulated as a specific recurrence relation. This research will investigate a number of swarm intelligence based algorithms and work towards reformulating their position updates in the standard recurrence relation. From this, stability conditions will be derived for these algorithms, to provide guidance on how values for control parameters should be initialized to guarantee that an equilibrium state will be reached. Furthermore, an analysis of the control parameter importance within this region is carried out using functional analysis of variance. This study is applied to both single-objective and multi-objective optimization algorithms.
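As an illustration of such stability conditions, a sketch of the widely cited deterministic (order-1) criterion for the standard PSO recurrence, namely -1 < w < 1 and 0 < c1 + c2 < 2(1 + w), where w is the inertia weight and c1, c2 the acceleration coefficients (the function name is illustrative; the research itself targets a broader class of algorithms and also uses stochastic stability criteria):

```python
def pso_order1_stable(w, c1, c2):
    """Deterministic order-1 stability check for the standard PSO recurrence:
    particle positions converge to an equilibrium when
    -1 < w < 1 and 0 < c1 + c2 < 2 * (1 + w)."""
    phi = c1 + c2
    return -1.0 < w < 1.0 and 0.0 < phi < 2.0 * (1.0 + w)

# The popular constriction-derived defaults lie inside the stable region,
# while an inertia weight above 1 falls outside it.
inside = pso_order1_stable(0.7298, 1.4962, 1.4962)
outside = pso_order1_stable(1.2, 1.5, 1.5)
```

Functional analysis of variance, as proposed in the thesis, would then sample (w, c1, c2) within this region to rank the parameters by their influence on performance.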
Webster Gova
Data Science Manager, Umuzi Academy
A novel machine learning approach to forecast production structure evolution The product-space methodology (PSM) has emerged as a strong contender for the stochastic prediction of country-level economic growth behaviours. Measures calculated from PSM provide a simplified way to identify a nation's global export positioning in an industry, and the industries it must target for export growth. Application of PSM to understand economic development also makes it easier to infer from trade data the likelihood of different products being exported together. PSM has some shortcomings which have not been fully addressed to date, including the methodology's static nature in only analyzing one year at a time. The PSM's insistence on attributing exports only to domestic factors, while dismissing contributions from global supply chains, has also faced great criticism, as has the fact that it suffers from the limitations of trade classifications in reflecting the production structures or skills embedded in exported products.
Investigations of mathematical interpretations to understand and interpret PSM metrics for possible optimization through machine learning (ML) algorithms have shown great promise. There is limited evidence in the literature reviewed to date that ML approaches have been used to understand how changes in production structures over time have contributed to economic transformation through the diffusion of knowledge and capabilities in the network of product relatedness. Our study is developing robust and efficient ML techniques capable of processing multiple time series of trade data, and of forecasting and inferring the stochastic dependency of future economic growth on historical trade data. The study will formalize multiple-step forecasting problems as supervised learning tasks that can be achieved in three major steps: (i) feature extraction to characterise each time series and reduce dimensionality, (ii) using extracted features as local learning approximators for clustering of multiple time series, and (iii) forecasting based on the salient features of each cluster. The performance of the ML approaches in this study will be benchmarked against economic growth measures from PSM before multi-step forecasting of economic development and changes in production structure is performed.
Chucknorris Madamombe
Afriadi Group (Pty) Ltd
Review and Analysis of Swarm Based Algorithms for Optimization Due to their powerful and resourceful performance in solving difficult optimization problems, swarm-based algorithms have been of much interest to many researchers in the scientific domain. All these swarm-based algorithms have been inspired by the natural behavior of swarms of biological organisms, e.g. animals, birds, bacteria, insects, fish and amphibians. It has been shown that these organisms provide a unique set of characteristics that can be used to design new swarm algorithms. Thus, the fascinating activities that are observed on a day-to-day basis in nature have been used as the basis for the formulation of new techniques for solving sophisticated problems in real life. A surfeit of swarm-based algorithms has been proposed since the first of these was published in 1992. These swarm-based algorithms have been successfully used to solve sophisticated real-life optimization problems. Even though each of these swarm-based algorithms is supported by an analogy from nature, based on some nature-inspired metaphor, their mathematical/algorithmic models are very similar, or at least share significant overlap.
The initial phase of the proposed study will be to conduct an extensive literature review of the available swarm-based algorithms. A total of 80 swarm-based algorithms will be listed. The study will then be narrowed down to review only the most popular algorithms based on Google Scholar citation counts: only the 65 most popular swarm-based algorithms will be reviewed. The review will cover the background (source of inspiration) of each swarm-based algorithm, as well as its mathematical and algorithmic models. The major focus of the proposed study is to examine the mathematical models of each algorithm and to draw out similarities and differences among these swarm-based algorithms. The descriptions of these swarm-based algorithms will be as extensive as possible.
The main goal of this research is to identify and categorize swarm-based algorithms for optimization based on different views such as nature-inspired view, application class view, optimization problems class view, computational complexity view and mathematical/algorithmic model view. A critical review of swarm-based algorithms will be done with reference to these different views. The critical review will develop a taxonomization based on the different views.
The second goal of this research is to conduct an extensive empirical analysis of these algorithms on a large benchmark suite of continuous-valued, single-objective, static, boundary constrained optimization problems. The goal of the empirical analysis is to conduct a control parameter sensitivity analysis from which best values of the control parameters can be derived. The other goal of the empirical analysis is to identify the best algorithm(s) for specific optimization problem classes based on different performance criteria. The computational complexity, i.e. the actual execution time as well as the asymptotic complexity analysis, of each algorithm will be examined.
Robert Nshimirimana
Optimization of Digital Radiography Using Multi-objective Particle Swarm Optimization Radiography is a 2-D transmission imaging technique that is extensively used for the non-destructive investigation of materials. The integrity of the investigation depends on the quality of the image, which is obtained by arranging the radiography system parameters in such a way that they approach a compromise optimum. Manual optimization is time consuming, labour intensive, and prone to human error. This research aims to develop an automated radiography system optimizer, based on multi-objective particle swarm optimization, to provide scanning or design parameters in the form of a set of Pareto optimal solutions for a radiography system.
Zander Wessels
NMRQL Research
A Walk-Forward Multi-Factor Machine Learning Investment Process The investment management industry is going through a paradigm shift: from biased and expensive human-centric investment decision making, to unbiased, scalable, adaptive, and testable algorithmic investment decision making at lower cost. This shift is being driven by cutting-edge machine learning algorithms, large amounts of structured and unstructured data, and processing power. Thus, the goal of this thesis is to propose an online collective intelligence framework in which online machine learning algorithms and fundamental financial models can develop different views on the securities and assets in question. After the algorithms have voted on which assets they believe will go up or down in the future, portfolios can be constructed using heuristic algorithms, e.g. PSO. Because these models are unbiased and behave reliably, they can be simulated robustly through time. These simulations account for survivorship bias, lookahead bias, transaction costs, market impact, liquidity risk, and risk management.

Post-doctoral Fellows

Student Thesis Title Abstract
Sunday Oladejo
Meta-heuristics for Training Support Vector Machines One of the most widely known supervised machine learning (ML) models is the Support Vector Machine (SVM). In recent times, SVMs have found applications in several fields such as bioinformatics, finance, geoinformatics, pattern recognition, and security. However, the accuracy of supervised ML models is greatly affected by concept drift, and SVMs are no exception. Concept drift causes the features or statistical properties of a model to change over time in unexpected ways. Hence, the predictions and classifications of such models become less accurate over time. To improve the performance of SVMs in the face of concept drift, this research engages metaheuristics to train the hyper-parameters of SVMs to be adaptive and more accurate. Moreover, new metaheuristics will be developed that can effectively train or tune the hyper-parameters of SVMs.