# Past Students

The following are postgraduate students who have graduated under my supervision or co-supervision.

## Project Students

Student | Degree | Project Title | Abstract |
---|---|---|---|
Jonathan Coffey jonocof@gmail.com | BSc Hons (Computer Science) | A Visualisation Tool for Population-based, Meta-heuristic Algorithms | Many real-world optimisation problems have been solved using population-based computational intelligence (CI) algorithms. These problems produce varying search landscapes in high dimensions that can disrupt many of these CI algorithms. A visualisation tool that depicts the search landscapes of higher-dimensional optimisation problems, and how a set of population-based algorithms traverse the solutions in these search landscapes, can greatly improve the understanding of the algorithms, whether the user is a novice or experienced in the field of CI. This project will develop such a visualisation tool. |
James Faure jamesfaure@icloud.com | B.Eng (Industrial Engineering) | Image Classification and Recognition of X-rays Used to Label Teeth and Teeth Abnormalities in Dental Analysis | Analysis of an X-ray for any dentist can be time-consuming and subject to human error. This project will develop machine learning technologies to automate the analysis of dental X-rays in order to determine whether any of the teeth show abnormalities. The resulting model can be implemented on an app platform which can easily be accessed by orthodontic radiologists, and which would be especially useful to those in rural areas where no dentists are available. |
Michaela Govender | B.Eng (Industrial Engineering) | Adaptive Polynomial Approximation Approach to Data Streams | This study will develop a new approach to modeling non-stationary data. The approach will make use of polynomial approximation, where the coefficients of a standard polynomial will be determined using particle swarm optimization (PSO). A quantum-inspired PSO algorithm will be used to allow the polynomial to adapt to non-stationary environments. The effectiveness of this approach will then be evaluated. |
Neil Lubbe neil.lubbe70@gmail.com | B.Eng (Industrial Engineering) | Using A Predictive Data Analytics Model to Predict The Ageing Potential of South African Red Wines | Tasting wine remains a subjective experience and, furthermore, perceiving the ageing potential of a wine is an equally subjective experience. The unique arrangement of chemical compounds in a wine influences its tasting notes as well as the perception of its ageing potential. Furthermore, the development of these chemical compounds over time is responsible for the successful (or unsuccessful) ageing of the wine. It must, however, be noted that the tasting notes of a wine are not necessarily correlated with its chemical composition, nor is it known whether the one can predict the other. Fortunately, there is a tendency for experienced winemakers to agree on wines that have aged well and wines that have not aged well over time. Hence, the purpose of this project is to determine whether the actual/realized ageing potential of wine can be predicted by utilizing the initial tasting perception and/or the initial chemical composition of the wine. |
Annika Nel annika.nel07@gmail.com | BSc Hons (Computer Science) | A Set-Based Approach to Training Support Vector Machines | Training a support vector machine (SVM) is a constrained optimisation problem which involves constructing an optimal class separating hyperplane. This has traditionally been achieved using a vector-based Lagrange multiplier approach. The objective of this project is to redefine the training process as a set-based optimisation problem and construct an SVM using set-based particle swarm optimisation. Such an implementation could provide several advantages over existing approaches. For example, unlike the vector-based approach, the dimension for candidate solutions would not have to be specified, and different candidate solutions may be of different sizes. Furthermore, a set-based solution would reduce the computational cost of training an SVM, resulting in a more scalable and efficient classifier. |
Stefan van Deventer 20273401@sun.ac.za | BSc Hons (Computer Science) | Learning to Trade from Zero-Knowledge | This research involves the implementation and analysis of a competitive co-evolutionary particle swarm optimization (PSO) algorithm to develop trading strategies from zero-knowledge. The competitive co-evolutionary PSO will be used to train a neural network (NN) to produce trading decisions. The focus will be on South African stocks as traded on the Johannesburg stock exchange (JSE). The aim is to evaluate different particle swarm optimization and competitive co-evolutionary strategies to see if it is feasible to use the resulting neural network traders in the real world. As part of the analysis, issues such as the saturation of the neural network hidden units due to using PSO as the training algorithm will be addressed. Because trading is ultimately a dynamic optimization problem, a quantum PSO will then be implemented to determine if the performance of the neural network trader can be improved. |
JP van Zyl 20706413@sun.ac.za | BSc Hons (Computer Science) | Set-based Particle Swarm Optimization for Polynomial Approximation | This project aims to develop a set-based particle swarm optimization algorithm to find optimal polynomials to approximate continuous mappings. |
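Two of the projects above determine polynomial coefficients with particle swarm optimization. As a rough illustration of that core idea only (not of either project's actual method, and with an illustrative target function, swarm size and control parameters chosen here as assumptions), a plain gbest PSO can fit a polynomial to sampled data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: samples of a continuous mapping to approximate.
xs = np.linspace(-1.0, 1.0, 50)
ys = np.sin(np.pi * xs)

DEGREE = 5                    # particle positions are polynomial coefficients
N_PARTICLES = 30
W, C1, C2 = 0.72, 1.49, 1.49  # commonly used PSO control parameters

def sse(coeffs):
    """Sum of squared errors of the polynomial against the samples."""
    return np.sum((np.polyval(coeffs, xs) - ys) ** 2)

pos = rng.uniform(-1.0, 1.0, (N_PARTICLES, DEGREE + 1))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([sse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(500):
    r1 = rng.random(pos.shape)
    r2 = rng.random(pos.shape)
    # Standard velocity and position updates of the gbest PSO.
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([sse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved] = pos[improved]
    pbest_f[improved] = f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(sse(gbest))  # small relative to the error of the initial random polynomials
```

The least-squares surface over the coefficients is convex, so this is an easy landscape for the PSO; adapting to a non-stationary stream, as in the Govender project, would require a dynamic variant such as the quantum-inspired PSO mentioned in the abstract.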

## Masters Students

Student | Degree | Thesis Title | Abstract |
---|---|---|---|
Adejumo, A | MSc 1999 University of Pretoria | Active Learning Algorithms for Multi-layer Feedforward Neural Networks | Backpropagation has played a vital role in the resurgence of interest in artificial neural networks. Ever since, much research effort has concentrated on finding ways to improve its performance. Active learning has emerged as an efficient alternative for improving the performance of multilayer feedforward neural networks. The learner is given active control over the information to include in the training set, and in doing so the generalization accuracy is improved, while the computational cost and complexity of the network are reduced compared to training on a fixed set of data. While much research effort has been invested in designing new learning approaches, an elaborate comparison of active learning approaches is still lacking. The objective of this research study is to compare and critique active learning approaches, and also to propose a new selective learning algorithm. This thesis presents a comparison of four selected active learning algorithms. The thesis concentrates on one type of application, namely function and time series approximation. |
Rodic, D | MSc 2000 University of Pretoria | Hybrid Exhaustive Search for Knowledge Discovery | The topic of this thesis is knowledge discovery and artificial intelligence based knowledge discovery algorithms. The knowledge discovery process and associated problems are discussed, followed by an overview of three classes of artificial intelligence based knowledge discovery algorithms. Typical representatives of each of these classes are presented and discussed in greater detail. A new knowledge discovery algorithm, called the Hybrid Classifier System (HCS), is then presented. The guiding concept behind the new algorithm was simplicity. The new knowledge discovery algorithm is loosely based on schemata theory. It is evaluated against one of the discussed algorithms from each class, namely CN2, C4.5, BRAINNE and BGP, using a benchmark of classification problems, and the results are discussed and compared. These results show that the new knowledge discovery algorithm performs satisfactorily, yielding accurate, crisp rule sets. Probably the main strength of the HCS algorithm is its simplicity, so it can serve as the foundation for many possible future extensions. Some of the possible extensions of the newly proposed algorithm are suggested in the final part of this thesis. |
Ismail, A | MSc 2002 University of Pretoria | Training and Optimization of Product Unit Neural Networks | Product units in the hidden layer of multilayer neural networks provide a powerful mechanism for neural networks to efficiently learn higher-order combinations of inputs. Training product unit neural networks using local optimization algorithms is difficult due to an increased number of local minima and increased chances of network paralysis. This thesis discusses the problems with using local optimization, especially gradient descent, to train product unit neural networks, and shows that particle swarm optimization, genetic algorithms and leapfrog are efficient alternatives to successfully train product unit neural networks. Architecture selection, i.e. pruning, of product unit neural networks is also studied and applied to determine near optimal neural network architectures that are used in the comparative studies. |
Brits, R | MSc 2003 University of Pretoria | Niching Strategies for Particle Swarm Optimization | Evolutionary algorithms and swarm intelligence techniques have been shown to successfully solve optimization problems where the goal is to find a single optimal solution. In multimodal domains, where the goal is to locate multiple solutions in a single search space, these techniques fail. Niching algorithms extend existing global optimization algorithms to locate and maintain multiple solutions concurrently. This thesis develops strategies that utilize the unique characteristics of the particle swarm optimization algorithm to perform niching. Shrinking topological neighbourhoods and optimization with multiple subswarms are used to identify and stably maintain niches. Solving systems of equations and multimodal functions are used to demonstrate the effectiveness of the new algorithms. |
Potgieter, G | MSc 2003 University of Pretoria | Mining Continuous Classes using Evolutionary Computing | Data mining is the term given to knowledge discovery paradigms that attempt to infer knowledge, in the form of rules, from structured data using machine learning algorithms. Specifically, data mining attempts to infer rules that are accurate, crisp, comprehensible and interesting. There are not many data mining algorithms for mining continuous classes. This thesis develops a new approach for mining continuous classes. The approach is based on a genetic program, which utilises an efficient genetic algorithm approach to evolve the non-linear regressions described by the leaf nodes of individuals in the genetic program's population. The approach also optimises the learning process by using an efficient, fast data clustering algorithm to reduce the training pattern search space. Experimental results from both algorithms are compared with results obtained from a neural network. The experimental results of the genetic program are also compared against a commercial data mining package (Cubist). These results indicate that the genetic algorithm technique is substantially faster than the neural network, and produces comparable accuracy. The genetic program produces substantially less complex rules than both the neural network and Cubist. |
Zowa, G | M.BA 2003 University of Pretoria | Technical Survival: Strategic Technology Migration for Next Generation Networks Telecommunication Business | |
Van Wyk, D | M.IT 2003 University of Pretoria | A Genetic Programming Approach to Normalizing Databases | |
Paquet, U | MSc 2003 University of Pretoria | Training Support Vector Machines With Particle Swarms | Particle swarms can easily be used to optimise a function with a set of linear equality constraints, by restricting the swarm's movement to the constrained search space. A linear particle swarm optimiser and a converging linear particle swarm optimiser are developed to optimise linear equality-constrained functions. It is shown that if the entire swarm of particles is initialised to consist of only feasible solutions, then the swarm can optimise the constrained objective function without ever again considering the set of constraints. The converging linear particle swarm optimiser overcomes the linear particle swarm optimiser's possibility of premature convergence. Training a support vector machine requires solving a constrained quadratic programming problem, and the converging linear particle swarm optimiser ideally fits the needs of an optimisation method for support vector machine training. Particle swarms are intuitive and easy to implement, and are presented as an alternative to current support vector machine training methods. |
Graaff, A | MSc 2004 University of Pretoria | The Artificial Immune System with Evolved Lymphocytes | The main purpose of the natural immune system is to protect the body against any unwanted foreign cells that could infect the body and lead to devastating results. The natural immune system has different lymphocytes to detect and destroy these unwanted foreign patterns. The natural immune system can be modeled into an artificial immune system that can be used to detect any unwanted patterns in a non-biological environment. One of the main tasks of an immune system is to learn the structure of these unwanted patterns for a faster response to future foreign patterns with the same or similar structure. The artificial immune system (AIS) can therefore be seen as a pattern recognition system. The AIS contains artificial lymphocytes (ALC) that classify any pattern either as part of a predetermined set of patterns or not. In the immune system, lymphocytes have different states: Immature, Mature, Memory or Annihilated. Lymphocytes in the annihilated state need to be removed from the active set of ALCs. The process of moving from one state to the next needs to be controlled in an efficient manner. This dissertation presents an AIS for detection of unwanted patterns with a dynamic active set of ALCs and proposes a threshold function to determine the state of an ALC. The AIS in the dissertation uses evolutionary computation techniques to evolve an optimal set of lymphocytes for better detection of unwanted patterns, and removes ALCs in the annihilated state from the active set of ALCs. |
Franken, N | MSc 2004 University of Pretoria | PSO-Based Coevolutionary Game Learning | This study aims to thoroughly investigate the exact nature of PSO-based coevolution techniques, with specific application to both the zero sum (Tic-Tac-Toe and Checkers) and non-zero sum (Iterated Prisoner's Dilemma) game domains. The results of the study illustrate the impact on performance of specific PSO parameter choices, neighbourhood information sharing structures and game-specific representations. A number of enhancements to the study of game learning are also presented, including new additions to the areas of benchmarking and coevolution. |
Nel, G | MSc 2005 University of Pretoria | A Memetic Genetic Program for Knowledge Discovery | Local search algorithms have proved effective in refining solutions that have been found by other algorithms. Evolutionary algorithms, as global search algorithms, have been shown to be successful in producing approximate solutions for optimisation and classification problems in acceptable computation times. A relatively new class of methods, memetic algorithms, uses local search to refine the approximate solutions produced by global search algorithms. This thesis develops such a memetic algorithm. The global search algorithm used as part of the new memetic algorithm is a genetic program that implements the building block hypothesis by building simplistic decision trees representing valid solutions, and gradually increasing the complexity of the trees. The specific building block hypothesis implementation is known as the building block approach to genetic programming, BGP. The effectiveness and efficiency of the new memetic algorithm, which combines the BGP algorithm with a local search algorithm, is demonstrated. |
Peer, ES | MSc 2005 University of Pretoria | Serendipitous Software Framework for Facilitating Collaboration in Computational Intelligence | A major flaw in the academic system, particularly pertaining to computer science, is that it rewards specialisation. The highly competitive quest for new scientific developments, or rather the quest for a better reputation and more funding, forces researchers to specialise in their own fields, leaving them little time to properly explore what others are doing, sometimes even within their own field of interest. Even the peer review process, which should provide the necessary balance, fails to achieve much diversity, since reviews are typically performed by persons who are again specialists in the particular field of the work. Further, software implementations are rarely reviewed, with the consequence that untenable results are published. Unfortunately, these factors contribute to an environment which is not conducive to collaboration, a cornerstone of academia -- building on the work of others. This work takes a step back and examines the general landscape of computational intelligence from a broad perspective, drawing on multiple disciplines to formulate a collaborative software platform which is flexible enough to support the needs of this diverse research community. Interestingly, this project did not set out with these goals in mind; rather, it evolved over time from something more specialised into the general framework described in this dissertation. Design patterns are studied as a means to manage the complexity of the computational intelligence paradigm in a flexible software implementation. Further, this dissertation demonstrates that releasing research software under an open source license eliminates some of the deficiencies of the academic process, while preserving, and even improving, the ability to build a reputation and pursue funding. Two software packages have been produced as products of this research: i) CILib, an open source library of computational intelligence algorithms; and ii) CiClops, a virtual laboratory for performing experiments that scale over multiple workstations. Together, these software packages are intended to improve the quality of research output and facilitate collaboration by sharing a repository of simulation data, statistical analysis tools and a single software implementation. |
Du Plessis, J | MSc 2006 University of Pretoria | ACODV: Ant Colony Optimisation Distance Vector Routing in Ad Hoc Networks | A mobile ad hoc network is a collection of wireless mobile devices which dynamically form a temporary network, without using any existing network infrastructure or centralised administration. Each node in the network effectively becomes a router, and forwards packets towards the packet’s destination node. Ad hoc networks are characterized by frequently changing network topology, multi-hop wireless connections and the need for dynamic, efficient routing protocols. This work considers the routing problem in a network of uniquely addressable sensors. These networks are encountered in many industrial applications, where the aim is to relay information from a collection of data gathering devices deployed over an area to central points. The routing problem in such networks is characterised by: the overarching requirement for low power consumption, as battery powered sensors may be required to operate for years without battery replacement; an emphasis on reliable rather than real-time communication, since it is more important for packets to arrive reliably than to arrive quickly; and very scarce processing and memory resources, as these sensors are often implemented on small low-power microprocessors. This work provides overviews of routing protocols in ad hoc networks, swarm intelligence, and swarm intelligence applied to ad hoc routing. Various mechanisms that are commonly encountered in ad hoc routing are experimentally evaluated under situations as close to real-life as possible. Where possible, enhancements to the mechanisms are suggested and evaluated. Finally, a routing protocol suitable for such low-power sensor networks is defined and benchmarked in various scenarios against the Ad hoc On-Demand Distance Vector (AODV) algorithm. |
Messerschmidt, L | MSc 2006 University of Pretoria | Using Particle Swarm Optimization to Evolve Two-Player Game Agents | Computer game-playing agents are almost as old as computers themselves, and people have been developing agents since the 1950s. Unfortunately the techniques for game-playing agents have remained basically the same for almost half a century – an eternity in computer time. Recently developed approaches have shown that it is possible to develop game-playing agents with the help of learning algorithms. This study is based on the concept of algorithms that learn how to play board games from zero initial knowledge about playing strategies. A coevolutionary approach, where a neural network is used to assess the desirability of leaf nodes in a game tree, and evolutionary algorithms are used to train neural networks in competition, is overviewed. This thesis then presents an alternative approach in which particle swarm optimization (PSO) is used to train the neural networks. Different variations of the PSO are implemented and compared. The results of the PSO approaches are also compared with that of an evolutionary programming approach. The performance of the PSO algorithms is investigated for different values of the PSO control parameters. This study shows that the PSO approach can be applied successfully to train game-playing agents. |
Grobler, H | MSc 2006 University of Pretoria | Surface Defect Detection by means of a Structural Light System | During the production of a motor vehicle, damage may be sustained by the components of the vehicle at various stages of the production process. As the vehicle being assembled progresses through the production process, the cost of correcting such damage increases significantly. This document describes a Surface Defect Detection System (SDDS) that was developed to detect damage to the outer panels of a vehicle, specifically the roof panel. The system makes use of a structural light configuration consisting of a laser and camera, together with appropriate image processing tools. The structural light system operates and is mounted on an industrial robot that forms part of an existing motor vehicle production plant. Although the developed system is targeted specifically towards the detection of bumps (high spots) and dents (low spots) on the surface of the roof of the BMW 3-series, the implementation is sufficiently generic as to allow other panels to be scanned. The image processing software is implemented as the Surface Defect Detection System Framework (SDDSF). |
Duminy, W | MSc 2007 University of Pretoria | A Learning Framework for Zero-Knowledge Game Playing Agent | The subjects of perfect information games, machine learning and computational intelligence combine in an experiment that investigates a method to build the skill of a game-playing agent from zero game knowledge. The skill of a playing agent is determined by two aspects: the first is the quantity and quality of the knowledge it uses, and the second is its search capacity. This thesis introduces a novel representation language that combines symbols and numeric elements to capture game knowledge. Insofar as search is concerned, an extension to an existing knowledge-based search method is developed. Empirical tests show an improvement over alpha-beta, especially in learning conditions where the knowledge may be weak. Current machine learning techniques as applied to game agents are reviewed. From these techniques a learning framework is established. The data-mining algorithm, ID3, and the computational intelligence technique, Particle Swarm Optimisation (PSO), form the key learning components of this framework. The classification trees produced by ID3 are subjected to new post-pruning processes specifically defined for the mentioned representation language. Different combinations of these pruning processes are tested and a dominant combination is chosen for use in the learning framework. As an extension to PSO, tournaments are introduced as a relative fitness function. A variety of alternative tournament methods are described and some experiments are conducted to evaluate these. The final design decisions are incorporated into the learning framework configuration, and learning experiments are conducted on Checkers and some variations of Checkers. These experiments show that learning has occurred, but also highlight the need for further development and experimentation. Some ideas in this regard conclude the thesis. |
Naicker, C | MSc 2007 University of Pretoria | Derating NichePSO | The search for multiple solutions is applicable to many fields (Engineering Science, Economics, and others). Multiple solutions allow for human judgement to select the best solution from a group of solutions that best match the search criteria. Finding multiple solutions to an optimisation problem has proved to be difficult. Evolutionary computation (EC) and, more recently, Particle Swarm Optimisation (PSO) algorithms have been used in this field to locate and maintain multiple solutions with fair success. This thesis develops and empirically analyses a new method to find multiple solutions within a convoluted search space. The method is a hybrid of the NichePSO and the sequential niche technique (SNT). The original SNT was developed using a Genetic Algorithm (GA). It included restrictions such as knowing or approximating the number of solutions that exist. A further pitfall of the SNT is that it introduces false optima after modifying the search space, thereby reducing the accuracy of the solutions. However, this can be resolved with a local search in the unmodified search space. Other sequential niching algorithms require that the search be repeated sequentially until all solutions are found, without considering what was learned in previous iterations, resulting in a blind and wasteful search. The NichePSO has been shown to be more accurate than GA-based algorithms. It does not require knowledge of the number of solutions in the search space prior to the search process. However, the NichePSO does not scale well for problems with many optima. The method developed in this thesis, referred to as the derating NichePSO, combines the SNT with the NichePSO. The main objective of the derating NichePSO is to eliminate the inaccuracy of the SNT and to improve the scalability of the NichePSO. The derating NichePSO is compared to the NichePSO, deterministic crowding [23] and the original SNT using various multimodal functions. The performance of the derating NichePSO is analysed, and it is shown that the derating NichePSO is more accurate than the SNT and more scalable than the NichePSO. |
Pun, J | MSc 2007 University of Pretoria | Gesture Recognition with Application in Music Arrangement | This thesis studies the interaction with music synthesis systems using hand gestures. Traditionally users of such systems were limited to input devices such as buttons, pedals, faders, and joysticks. The use of gestures allows the user to interact with the system in a more intuitive way. Without the constraint of input devices, the user can simultaneously control more elements within the music composition, thus increasing the level of the system’s responsiveness to the musician’s creative thoughts. A working system of this concept is implemented, employing computer vision and machine intelligence techniques to recognise the user’s gestures. |
Van der Stockt, SAG | MSc 2008 University of Pretoria | A Generic Neural Network Framework using Design Patterns | Designing object-oriented software is hard, and designing reusable object-oriented software is even harder. This task is even more daunting for a developer of computational intelligence applications, as optimising one design objective tends to make others inefficient or even impossible. Classic examples in computer science include ‘storage vs. time’ and ‘simplicity vs. flexibility.’ Neural network requirements are by their very nature very tightly coupled – a required design change in one area of an existing application tends to have severe effects in other areas, making the change impossible or inefficient. Often this situation leads to a major redesign of the system, and in many cases a completely rewritten application. Many commercial and open-source packages do exist, but these cannot always be extended to support input from other fields of computational intelligence, either for proprietary reasons or because they fail to take all design requirements fully into consideration. Design patterns make a science out of writing software that is modular, extensible and efficient, as well as easy to read and understand. The essence of a design pattern is to avoid repeatedly solving the same design problem from scratch by reusing a solution that solves the core problem. This pattern or template for the solution has well-understood prerequisites, structure, properties, behaviour and consequences. CILib is a framework that allows developers to develop new computational intelligence applications quickly and efficiently. Flexibility, reusability and clear separation between components are maximised through the use of design patterns. Reliability is also ensured, as the framework is open source and thus has many people who collaborate to ensure that the framework is well designed and error free. This dissertation discusses the design and implementation of a generic neural network framework that allows users to design, implement and use any possible neural network models and algorithms in such a way that they can reuse and be reused by any other computational intelligence algorithm in the rest of the framework, or any external applications. This is achieved by using object-oriented design patterns in the design of the framework. |
Zablocki, F | MSc 2008 University of Pretoria | Multiple Sequence Alignment using Particle Swarm Optimization | The recent advent of bioinformatics has given rise to the central and recurrent problem of optimally aligning biological sequences. Many techniques have been proposed in an attempt to solve this complex problem with varying degrees of success. This thesis investigates the application of a computational intelligence technique known as particle swarm optimization (PSO) to the multiple sequence alignment (MSA) problem. Firstly, the performance of the standard PSO (S-PSO) and its characteristics are fully analyzed. Secondly, a scalability study is conducted that aims at expanding the S-PSO’s application to complex MSAs, as well as studying the behaviour of three other kinds of PSOs on the same problems. Experimental results show that the PSO is efficient in solving the MSA problem and compares positively with well-known CLUSTAL X and T-COFFEE. |
Poggiolini, M | MSc 2009 University of Pretoria | The Feature Detection Rule and its application within the Negative Selection Algorithm | The negative selection algorithm developed by Forrest et al. was inspired by the manner in which T-cell lymphocytes mature within the thymus before being released into the blood system. The resultant T-cell lymphocytes, which are then released into the blood, exhibit an interesting characteristic: they are only activated by non-self cells that invade the human body. The work presented in this thesis examines the current body of research on the negative selection theory and introduces a new affinity threshold function, called the feature-detection rule. The feature-detection rule utilises the inter-relationship between both adjacent and non-adjacent features within a particular problem domain to determine if an artificial lymphocyte is activated by a particular antigen. The performance of the feature-detection rule is contrasted with traditional affinity-matching functions currently employed within negative selection theory, most notably the r-chunks rule (which subsumes the r-contiguous bits rule) and the Hamming-distance rule. The performance will be characterised by considering the detection rate, false-alarm rate, degree of generalisation and degree of overfitting. The thesis will show that the feature-detection rule is superior to the r-chunks rule and the Hamming-distance rule, in that the feature-detection rule requires a much smaller number of detectors to achieve higher detection rates and lower false-alarm rates. The thesis additionally argues that the way in which permutation masks are currently applied within negative selection theory is incorrect and counterproductive, while placing the feature-detection rule within the spectrum of affinity-matching functions currently employed by artificial immune-system (AIS) researchers. |

Grobler, J | MEng 2009 University of Pretoria | Particle Swarm Optimization and Differential Evolution for Multi-Objective Multiple Machine Scheduling | Production scheduling is one of the most important issues in the planning and operation of manufacturing systems. Customers increasingly expect to receive the right product at the right price at the right time. Various problems experienced in manufacturing, for example low machine utilization and excessive work-in-process, can be attributed directly to inadequate scheduling. In this dissertation a production scheduling algorithm is developed for Optimatix, a South African-based company specializing in supply chain optimization. To address the complex requirements of the customer, the problem was modeled as a flexible job shop scheduling problem with sequence-dependent set-up times, auxiliary resources and production down time. The algorithm development process focused on investigating the application of both particle swarm optimization (PSO) and differential evolution (DE) to production scheduling environments characterized by multiple machines and multiple objectives. Alternative problem representations, algorithm variations and multi-objective optimization strategies were evaluated to obtain an algorithm which performs well against both existing rule-based algorithms and an existing complex flexible job shop scheduling solution strategy. Finally, the generality of the priority-based algorithm was evaluated by applying it to the scheduling of production and maintenance activities at Centurion Ice Cream and Sweets. The production environment was modeled as a multi-objective uniform parallel machine shop problem with sequence-dependent set-up times and unavailability intervals. A self-adaptive modified vector evaluated DE algorithm was developed and compared to classical PSO and DE vector evaluated algorithms. Promising results were obtained with respect to the suitability of the algorithms for solving a range of multi-objective multiple machine scheduling problems. |

Neethling, M | MSc 2009 University of Pretoria | Using SetPSO to Determine RNA Secondary Structure | RNA secondary structure prediction is an important field in Bioinformatics. A number of different approaches have been developed to simplify the determination of RNA molecule structures. RNA is a nucleic acid found in living organisms which fulfils a number of important roles in living cells. Knowledge of its structure is crucial in the understanding of its function. Determining RNA secondary structure computationally, rather than by physical means, has the advantage of being a quicker and cheaper method. This dissertation introduces a new Set-based Particle Swarm Optimisation algorithm, known as SetPSO for short, to optimise the structure of an RNA molecule, using an advanced thermodynamic model. Structure prediction is modelled as an energy minimisation problem. Particle swarm optimisation is a simple but effective stochastic optimisation technique developed by Kennedy and Eberhart. This simple technique was adapted to work with variable length particles which consist of a set of elements rather than a vector of real numbers. The effectiveness of this structure prediction approach was compared to that of a dynamic programming algorithm called mfold. It was found that SetPSO can be used as a combinatorial optimisation technique which can be applied to the problem of RNA secondary structure prediction. This research also included an investigation into the behaviour of the new SetPSO optimisation algorithm. Further study needs to be conducted to evaluate the performance of SetPSO on different combinatorial and set-based optimisation problems. |

Hauptfleisch, A | MSc 2010 University of Pretoria | Automatic Road Extraction from High Resolution Satellite Imagery using Spectral Classification Methods | Road networks play an important role in a number of geospatial applications, such as cartographic, infrastructure planning and traffic routing software. Automatic and semi-automatic road network extraction techniques have significantly increased the extraction rate of road networks. Automated processes still yield some erroneous and incomplete results, and costly human intervention is still required to evaluate results and correct errors. With the aim of improving the accuracy of road extraction systems, three objectives are defined in this thesis: Firstly, the study seeks to develop a flexible semi-automated road extraction system, capable of extracting roads from QuickBird satellite imagery. The second objective is to integrate a variety of algorithms within the road network extraction system. The benefits of using each of these algorithms within the proposed road extraction system are illustrated. Finally, a fully automated system is proposed by incorporating a number of the algorithms investigated throughout the thesis. |

Louis, A | MSc 2010 University of Pretoria | Unsupervised Discovery of Relations for Analysis of Textual Data in Digital Forensics | This dissertation addresses the problem of analysing digital data in digital forensics. It will be shown that text mining methods can be adapted and applied to digital forensics to aid analysts to more quickly, efficiently and accurately analyse data to reveal truly useful information. Investigators who wish to utilise digital evidence must examine and organise the data to piece together events and facts of a crime. The difficulty with finding relevant information quickly using the current tools and methods is that these tools rely very heavily on background knowledge for query terms and do not fully utilise the content of the data. A novel framework in which to perform evidence discovery is proposed in order to reduce the quantity of data to be analysed, aid the analysts’ exploration of the data and enhance the intelligibility of the presentation of the data. The framework combines information extraction techniques with visual exploration techniques to provide a novel approach to performing evidence discovery, in the form of an evidence discovery system. By utilising unrestricted, unsupervised information extraction techniques, the investigator does not require input queries or keywords for searching, thus enabling the investigator to analyse portions of the data that may not have been identified by keyword searches. The evidence discovery system produces text graphs of the most important concepts and associations extracted from the full text to establish ties between the concepts and provide an overview and general representation of the text. Through an interactive visual interface the investigator can explore the data to identify suspects, events and the relations between suspects. Two models are proposed for performing the relation extraction process of the evidence discovery framework. The first model takes a statistical approach to discovering relations based on co-occurrences of complex concepts. The second model utilises a linguistic approach using named entity extraction and information extraction patterns. A preliminary study was performed to assess the usefulness of a text mining approach to digital forensics as against the traditional information retrieval approach. It was concluded that the novel approach to text analysis for evidence discovery presented in this dissertation is a viable and promising approach. The preliminary experiment showed that the results obtained from the evidence discovery system, using either of the relation extraction models, are sensible and useful. The approach advocated in this dissertation can therefore be successfully applied to the analysis of textual data for digital forensics. |

Barla-Szabo, D | MSc 2010 University of Pretoria | A Study of Gradient Based Particle Swarm Optimisers | Gradient-based optimisers are a natural way to solve optimisation problems, and have long been used for their efficacy in exploiting the search space. Particle swarm optimisers (PSOs), when using reasonable algorithm parameters, are considered to have good exploration characteristics. This thesis proposes a specific way of constructing hybrid gradient PSOs. Heterogeneous, hybrid gradient PSOs are constructed by allowing the gradient algorithm to optimise local best particles, while the PSO algorithm governs the behaviour of the rest of the swarm. This approach allows the distinct algorithms to concentrate on performing the separate tasks of exploration and exploitation. Two new PSOs, the Gradient Descent PSO (GDPSO), which combines the Gradient Descent and PSO algorithms, and the LeapFrog PSO, which combines the LeapFrog and PSO algorithms, are introduced. The GDPSO represents arguably the simplest hybrid gradient PSO possible, while the LeapFrog PSO incorporates the more sophisticated LFOP1(b) algorithm, exhibiting a heuristic algorithm design and dynamic time step adjustment mechanism. The strong tendency of these hybrids to prematurely converge is examined, and it is shown that by modifying algorithm parameters and delaying the introduction of gradient information, it is possible to retain the strong exploration capabilities of the original PSO algorithm while also benefiting from the exploitation of the gradient algorithms. |
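As background to the hybrids described above, the canonical gbest PSO update that they build on can be sketched as follows. This is a minimal, illustrative sketch: the parameter values are common constriction-style defaults, and the sphere benchmark stands in for a real objective.

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares, with its minimum of 0 at the origin."""
    return sum(xi * xi for xi in x)

def pso_step(positions, velocities, pbest, gbest, w=0.7298, c1=1.4962, c2=1.4962):
    """One synchronous update of the canonical gbest PSO.
    Each particle is a list of floats, updated in place."""
    for i, (x, v) in enumerate(zip(positions, velocities)):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            v[d] = (w * v[d]                            # inertia
                    + c1 * r1 * (pbest[i][d] - x[d])    # cognitive pull
                    + c2 * r2 * (gbest[d] - x[d]))      # social pull
            x[d] += v[d]

# Minimise the 2-D sphere function with 20 particles.
random.seed(1)
n, dim = 20, 2
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=sphere)[:]
for _ in range(200):
    pso_step(pos, vel, pbest, gbest)
    for i, x in enumerate(pos):
        if sphere(x) < sphere(pbest[i]):
            pbest[i] = x[:]
    gbest = min(pbest, key=sphere)[:]
print(sphere(gbest))  # approaches 0
```

In the hybrids of this thesis, a gradient step (Gradient Descent or LFOP1(b)) would additionally be applied to the best particles, handing exploitation over to the gradient method.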

Dean, E | MSc 2010 University of Pretoria | Computer Aided Identification of Biological Specimens using Self-Organizing Maps | For scientific or socio-economic reasons it is often necessary or desirable that biological material be identified. Given that there are an estimated 10 million living organisms on Earth, the identification of biological material can be problematic. Consequently the services of taxonomist specialists are often required. However, if such expertise is not readily available it is necessary to attempt an identification using an alternative method. Some of these alternative methods are unsatisfactory or can lead to a wrong identification. One of the most common problems encountered when identifying specimens is that important diagnostic features are often not easily observed, or may even be completely absent. A number of techniques can be used to try to overcome this problem, one of which, the Self-Organizing Map (SOM), is a particularly appealing technique because of its ability to handle missing data. This thesis explores the use of SOMs as a technique for the identification of indigenous trees of the Acacia species in KwaZulu-Natal, South Africa. The ability of the SOM technique to perform exploratory data analysis through data clustering is utilized and assessed, as is its usefulness for visualizing the results of the analysis of numerical, multivariate botanical data sets. The SOM’s ability to investigate, discover and interpret relationships within these data sets is examined, and the technique’s ability to identify tree species successfully is tested. These data sets are also tested using the C5 and CN2 classification techniques. Results from both these techniques are compared with the results obtained by using a commercial SOM package. These results indicate that the application of the SOM to the problem of biological identification could provide the start of the long-awaited breakthrough in computerized identification that biologists have eagerly been seeking. |
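The SOM training loop referred to above can be sketched in a few lines. This is an illustrative sketch only (exponential decay schedules and a Gaussian neighbourhood are one common choice; the cluster data is synthetic):

```python
import math, random

def bmu(weights, x):
    """Best-matching unit: grid coordinates of the node whose weight vector is closest to x."""
    rows, cols = len(weights), len(weights[0])
    return min(((r, c) for r in range(rows) for c in range(cols)),
               key=lambda rc: sum((wi - xi) ** 2
                                  for wi, xi in zip(weights[rc[0]][rc[1]], x)))

def som_train(data, rows, cols, dim, epochs=30, lr0=0.5, sigma0=None):
    """Train a rows x cols self-organizing map on `data` (lists of floats).
    Learning rate and neighbourhood radius decay exponentially over the epochs."""
    sigma0 = sigma0 or max(rows, cols) / 2.0
    weights = [[[random.random() for _ in range(dim)]
                for _ in range(cols)] for _ in range(rows)]
    for t in range(epochs):
        lr = lr0 * math.exp(-t / epochs)
        sigma = sigma0 * math.exp(-t / epochs)
        for x in data:
            br, bc = bmu(weights, x)
            for r in range(rows):
                for c in range(cols):
                    # Gaussian neighbourhood: nodes near the BMU learn the most
                    h = math.exp(-((r - br) ** 2 + (c - bc) ** 2)
                                 / (2 * sigma * sigma))
                    for d in range(dim):
                        weights[r][c][d] += lr * h * (x[d] - weights[r][c][d])
    return weights

# Two well-separated groups should end up mapped to different SOM nodes.
random.seed(0)
data = ([[random.gauss(0, 0.05) for _ in range(2)] for _ in range(30)]
        + [[random.gauss(1, 0.05) for _ in range(2)] for _ in range(30)])
som = som_train(data, 2, 2, 2)
```

Missing feature values, which the abstract highlights as the SOM's strength, are typically handled by simply skipping the missing dimensions in the distance and update computations.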

Papacostantis, E | MSc 2010 University of Pretoria | Competitive Co-Evolution of Trend Reversal Indicators using Particle Swarm Optimisation | Computational Intelligence has found a challenging testbed for various paradigms in the financial sector. Extensive research has resulted in numerous financial applications using neural networks and evolutionary computation, mainly genetic algorithms and genetic programming. More recent advances in the field of computational intelligence have not yet been applied as extensively or have not become available in the public domain, due to the confidentiality requirements of financial institutions. This study investigates how co-evolution together with the combination of particle swarm optimisation and neural networks could be used to discover competitive security trading agents that could enable the timing of buying and selling securities to maximise net profit and minimise risk over time. The investigated model attempts to identify security trend reversals with the help of technical analysis methodologies. Technical market indicators provide the necessary market data to the agents and reflect information such as supply, demand, momentum, volatility, trend, sentiment and retracement. All this is derived from the security price alone, which is one of the strengths of technical analysis and the reason for its use in this study. The model proposed in this thesis evolves trading strategies within a single population of competing agents, where each agent is represented by a neural network. The population is governed by a competitive co-evolutionary particle swarm optimisation algorithm, with the objective of optimising the weights of the neural networks. A standard feed forward neural network architecture is used, which functions as a market trend reversal confidence measure. Ultimately, the neural network becomes an amalgamation of the technical market indicators used as inputs, and hence is capable of detecting trend reversals. Timely trading actions are derived from the confidence output, by buying or short selling securities when the price is expected to rise or fall, respectively. No expert trading knowledge is presented to the model, only the technical market indicator data. The co-evolutionary particle swarm optimisation model facilitates the discovery of favourable technical market indicator interpretations, starting with zero knowledge. A competitive fitness function is defined that allows the evaluation of each solution relative to other solutions, based on predefined performance metric objectives. The relative fitness function in this study considers net profit and the Sharpe ratio as a risk measure. For the purposes of this study, the stock prices of eight large market capitalisation companies were chosen. Two benchmarks were used to evaluate the discovered trading agents, consisting of a Bollinger Bands/Relative Strength Index rule-based strategy and the popular buy-and-hold strategy. The agents that were discovered from the proposed hybrid computational intelligence model outperformed both benchmarks by producing higher returns for in-sample and out-of-sample data at a low risk. This indicates that the introduced model is effective in finding favourable strategies, based on observed historical security price data. Transaction costs were considered in the evaluation of the computational intelligent agents, making this a feasible model for a real-world application. |

Smit, M | MSc 2010 University of Pretoria | Interactive Narrative Generation using Computational Verb Theory | Interactive narrative extends traditional story-telling techniques by enabling previously passive observers to become active participants in the narrative events that unfold. A variety of approaches have attempted to construct such interactive narrative spaces and reconcile the goals of interactivity and dramatic story-telling. With the advent of the linguistic variable in 1972, a means was established for modelling natural language words and phrases mathematically and computationally. Over the past decade, the computational verb, first introduced in 1997, has been developed as a mathematical means of modelling natural language verbs in terms of dynamic systems, and vice versa. Computational verb theory extends the initial concept of the linguistic variable beyond being able to model adjectives, nouns, and passive states, into the realm of actions as denoted by natural language verbs. This thesis presents the framework and implementation of a system that generates interactive narrative spaces from narrative text. The concept of interactive narrative is introduced and recent developments in the area of interactive narrative are discussed. Secondly, a brief history of the development of the linguistic variable and the computational verb are provided. With the context of the computational verb (interactive) narrative generation (CVTNG) system presented, the underlying theoretical principles of the system are established. The CVTNG system principles are described in terms of fuzzy set, computational verb, and constraint satisfaction theory. The fuzzy set, computational verb, and constraint satisfaction principles are organised according to a CVTNG architecture. The CVTNG architecture is then described in terms of its subsystems, structures, algorithms, and interfaces. Each CVTNG system component is related to the overall design considerations and goals. A prototype of the CVTNG system is implemented and tested against a suite of natural language sentences. The behaviour and performance of the CVTNG system prototype are discussed in relation to the CVTNG system’s design principles. Results are calculated and stored as variable values that are dynamically and generically associated with representational means, specifically computer graphics, to illustrate the generation of interactive narrative spaces. Plans for future work are discussed to show the immense development potential of this application. The thesis concludes that the CVTNG system provides a solid and extendable base for the intuitive generation of interactive narrative spaces from narrative text, computational verb models, and freely associated media. |

Anguelov, B | MSc 2012 University of Pretoria | Video Game Pathfinding and Improvements to Discrete Search on Grid-based Maps | The most basic requirement for any computer controlled game agent in a video game is to be able to successfully navigate the game environment. Pathfinding is an essential component of any agent navigation system. Pathfinding is, at the simplest level, a search technique for finding a route between two points in an environment. The real-time multi-agent nature of video games places extremely tight constraints on the pathfinding problem. This study aims to provide the first complete review of the current state of video game pathfinding both with regard to the graph search algorithms employed and the implications of pathfinding within dynamic game environments. Furthermore, this thesis presents novel work in the form of a domain-specific search algorithm for use on grid-based game maps: the spatial grid A* algorithm, which is shown to offer significant improvements over A* within the intended domain. |
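The baseline that the spatial grid A* algorithm improves upon is classic A* search on a grid. A compact sketch (4-connected grid with a Manhattan heuristic; the demo map is illustrative):

```python
import heapq, itertools

def astar(grid, start, goal):
    """A* search on a 4-connected grid; grid[r][c] == 1 marks an obstacle.
    Returns an optimal path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()  # tie-breaker so the heap never compares cells/parents
    open_heap = [(h(start), 0, next(tie), start, None)]
    g = {start: 0}
    came = {}
    while open_heap:
        _, cost, _, cur, parent = heapq.heappop(open_heap)
        if cur in came:
            continue                      # already expanded via a cheaper route
        came[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:        # walk parent links back to the start
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = cost + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    heapq.heappush(open_heap,
                                   (ng + h((nr, nc)), ng, next(tie), (nr, nc), cur))
    return None

# S-shaped corridor: the optimal route needs 16 moves (17 cells).
grid = [[0, 0, 0, 0, 0],
        [1, 1, 1, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1],
        [0, 0, 0, 0, 0]]
path = astar(grid, (0, 0), (4, 4))
```

Grid-specific variants such as the spatial grid A* exploit the regular structure of such maps to reduce the number of nodes expanded; the sketch above is only the generic starting point.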

Duhain, JGOL | MSc 2012 University of Pretoria | Particle Swarm Optimization in Dynamically Changing Environments | Real-world optimisation problems are often dynamic in nature. Recently, much research has been done to apply particle swarm optimisation (PSO) to dynamic environments (DEs). However, these research efforts generally focused on optimising one variation of the PSO algorithm for one type of DE. The aim of this work is to develop a more comprehensive view of PSO for DEs. This thesis studies different schemes of characterising and taxonomising DEs, performance measures used to quantify the performance of optimisation algorithms applied to DEs, various adaptations of PSO for DEs, and the effectiveness of these approaches on different DE types. The standard PSO algorithm has shown limitations when applied to DEs. To overcome these limitations, the standard PSO can be modified using personal best re-evaluation, change detection and response, diversity maintenance, or swarm sub-division and parallel tracking of optima. To investigate the strengths and weaknesses of these approaches, a representative sample of algorithms, namely, the standard PSO, re-evaluating PSO, reinitialising PSO, atomic PSO (APSO), quantum swarm optimisation (QSO), multi-swarm, and self-adapting multi-swarm (SAMS), are empirically analysed. These algorithms are analysed on a range of DE test cases, and their ability to detect and track optima are evaluated using performance measures designed for DEs. The experiments show that QSO, multi-swarm and reinitialising PSO provide the best results. However, the most effective approach to use depends on the dimensionality, modality and type of the DEs, as well as on the objective of the algorithm. A number of observations are also made regarding the behaviour of the swarms, and the influence of certain control parameters of the algorithms evaluated. |
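Of the variants listed, QSO is the easiest to sketch: its "quantum" particles do not follow the usual velocity update but are re-sampled each iteration inside a cloud around the global best, which maintains diversity after the environment changes. A hedged illustration (the `r_cloud` parameter name and the uniform-ball sampling scheme are one common choice, not necessarily the exact formulation studied in the thesis):

```python
import math, random

def quantum_position(gbest, r_cloud):
    """Re-sample a quantum particle uniformly inside a ball of radius
    r_cloud centred on the global best position."""
    dim = len(gbest)
    # Random direction: normalise a vector of Gaussian draws.
    direction = [random.gauss(0, 1) for _ in range(dim)]
    norm = math.sqrt(sum(d * d for d in direction)) or 1.0
    # Radius drawn so that points are uniformly dense in the ball.
    radius = r_cloud * random.random() ** (1.0 / dim)
    return [g + radius * d / norm for g, d in zip(gbest, direction)]
```

A QSO swarm typically mixes such quantum particles with ordinary PSO particles, so the cloud keeps probing the neighbourhood of the optimum while the rest of the swarm exploits it.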

Pampara, G | MSc 2012 University of Pretoria | Angle Modulated Population based Algorithms to Solve Binary Problems | Recently, continuous-valued optimization problems have received a great amount of focus, resulting in optimization algorithms which are very efficient within the continuous-valued space. Many optimization problems are, however, defined within the binary-valued problem space. These continuous-valued optimization algorithms cannot operate directly on a binary-valued problem representation without algorithm adaptations, because the mathematics used within these algorithms generally fails within a binary problem space. Unfortunately, such adaptations may alter the behavior of the algorithm, potentially degrading the performance of the original continuous-valued optimization algorithm. Additionally, binary representations present complications with respect to increasing problem dimensionality, interdependencies between dimensions, and a loss of precision. This research investigates the possibility of applying continuous-valued optimization algorithms to solve binary-valued problems, without requiring algorithm adaptation. This is achieved through the application of a mapping technique, known as angle modulation. Angle modulation effectively addresses most of the problems associated with the use of a binary representation by abstracting a binary problem into a four-dimensional continuous-valued space, from which a binary solution is then obtained. The abstraction is obtained as a bit-generating function produced by a continuous-valued algorithm. A binary solution is then obtained by sampling the bit-generating function. This thesis proposes a number of population-based angle-modulated continuous-valued algorithms to solve binary-valued problems. These algorithms are then compared to binary algorithm counterparts, using a suite of benchmark functions. Empirical analysis will show that the angle-modulated continuous-valued algorithms are viable alternatives to binary optimization algorithms. |
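The bit-generating function used by angle modulation is commonly given as g(x) = sin(2π(x − a) · b · cos(2π · c · (x − a))) + d, with the four coefficients (a, b, c, d) evolved in continuous space. A sketch of the sampling step (the sampling points x = 0, 1, ..., n − 1 and the sign threshold are the usual convention, stated here as an assumption):

```python
import math

def bit_string(coeffs, n_bits):
    """Sample the angle-modulation generating function at x = 0..n_bits-1 and
    threshold at zero to obtain a bit string.  coeffs = (a, b, c, d) is the
    four-dimensional continuous solution evolved by the optimiser."""
    a, b, c, d = coeffs
    bits = []
    for x in range(n_bits):
        g = math.sin(2 * math.pi * (x - a) * b
                     * math.cos(2 * math.pi * c * (x - a))) + d
        bits.append(1 if g > 0 else 0)
    return bits
```

The continuous optimiser then searches the fixed four-dimensional (a, b, c, d) space instead of the n-dimensional bit space, regardless of how long the required bit string is.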

Rakitianskaia, AS | MSc 2012 University of Pretoria | Using Particle Swarm Optimisation to Train Feedforward Neural Networks in Dynamic Environments | The feedforward neural network (NN) is a mathematical model capable of representing any non-linear relationship between input and output data. It has been successfully applied to a wide variety of classification and function approximation problems. Various neural network training algorithms have been developed, including the particle swarm optimiser (PSO), which was shown to outperform the standard back propagation training algorithm on a selection of problems. However, it was usually assumed that the environment in which a NN operates is static. Such an assumption is often not valid for real life problems, and the training algorithms have to be adapted accordingly. Various dynamic versions of the PSO have already been developed. This work investigates the applicability of dynamic PSO algorithms to NN training in dynamic environments, and compares the performance of dynamic PSO algorithms to the performance of back propagation. Three popular dynamic PSO variants are considered. The extent of adaptive properties of back propagation and dynamic PSO under different kinds of dynamic environments is determined. Dynamic PSO is shown to be a viable alternative to back propagation, especially under the environments exhibiting infrequent gradual changes. |

Morkel, T | MSc 2012 University of Pretoria | Image Steganography Applications for Secure Communication | To securely communicate information between parties or locations is not an easy task considering the possible attacks or unintentional changes that can occur during communication. Encryption is often used to protect secret information from unauthorised access. Encryption, however, is not inconspicuous and the observable exchange of encrypted information between two parties can provide a potential attacker with information on the sender and receiver(s). The presence of encrypted information can also entice a potential attacker to launch an attack on the secure communication. This dissertation investigates and discusses the use of image steganography, a technology for hiding information in other information, to facilitate secure communication. Secure communication is divided into three categories: self-communication, one-to-one communication and one-to-many communication, depending on the number of receivers. In this dissertation, applications that make use of image steganography are implemented for each of the secure communication categories. For self-communication, image steganography is used to hide one-time passwords (OTPs) in images that are stored on a mobile device. For one-to-one communication, a decryptor program that forms part of an encryption protocol is embedded in an image using image steganography and for one-to-many communication, a secret message is divided into pieces and different pieces are embedded in different images. The image steganography applications for each of the secure communication categories are discussed along with the advantages and disadvantages that the applications have over more conventional secure communication technologies. An additional image steganography application is proposed that determines whether information is modified during communication. |
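A minimal illustration of the embedding idea underlying such applications is plain LSB substitution, one message bit per cover byte. This is a sketch only; practical steganographic schemes, including those in the dissertation, additionally encrypt and spread the payload:

```python
def embed(cover: bytearray, message: bytes) -> bytearray:
    """Hide `message` in the least-significant bits of `cover`,
    one message bit per cover byte (MSB-first within each message byte)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite only the LSB
    return stego

def extract(stego: bytearray, n_bytes: int) -> bytes:
    """Recover n_bytes previously hidden by embed()."""
    out = bytearray()
    for j in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (stego[j * 8 + i] & 1)
        out.append(byte)
    return bytes(out)
```

Because only the least-significant bit of each cover byte changes, the stego image is visually indistinguishable from the cover, which is what makes the exchange inconspicuous compared to plainly transmitted ciphertext.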

Engelbrecht, N | MEng 2013 University of Pretoria | Analysis of RED Packet Loss Performance in a Simulated IP WAN | The Internet supports a diverse number of applications, which have different requirements for a number of services. Next generation networks provide high speed connectivity between hosts, which leaves the service provider to configure network devices appropriately, in order to maximize network performance. Service provider settings are based on best recommendation parameters, which give an opportunity to optimize these settings even further. This dissertation focuses on a packet discarding algorithm, known as random early detection (RED), to determine parameters which will maximize utilization of a networking resource. The two dominant traffic protocols used across an IP backbone are user datagram protocol (UDP) and transmission control protocol (TCP). UDP traffic flows transmit packets regardless of network conditions, dropping packets without changing their transmission rates. However, TCP traffic flows concern themselves with the network condition, reducing the packet transmission rate based on packet loss. Packet loss indicates that a network is congested. The sliding window concept, also known as the TCP congestion window, adjusts to the number of acknowledgements the source node receives from the destination node. This paradigm provides a means to transmit data across the available bandwidth across a network. A well known and widely implemented simulation environment, the network simulator 2 (NS2), was used to analyse the RED mechanism. The NS2 software gained its popularity as a comprehensive networking simulation tool. Network protocol traffic (UDP and TCP) characteristics comply with theory, which verifies that the traffic generated by this simulator is valid. It is shown that the autocorrelation function differs between these two traffic types, verifying that the generated traffic does conform to theoretical and practical results. UDP traffic has a short-range dependency while TCP traffic has a long-range dependency. Simulation results show the effects of the RED algorithm on network traffic and equipment performance. It is shown that random packet discarding improves source transmission rate stabilization, as well as node utilization. If the packet dropping probability is set high, the TCP source transmission rates are low, but a low packet drop probability provides high transmission rates to a few sources and low transmission rates to the majority of other sources. Therefore, an ideal packet drop probability was obtained to complement TCP source transmission rates and node utilization. Statistical distributions were modelled according to sampled data from the simulations, which also show improvements to the network with random packet discarding. The results obtained contribute to congestion control across wide area networks. Even though a number of queue management implementations exist, RED is the implementation most widely used by service providers. |
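The RED mechanism analysed above drops arriving packets with a probability that ramps linearly between two thresholds on an exponentially weighted average of the queue size. A sketch of the basic algorithm (parameter names follow the usual RED literature; the concrete values in the usage example are illustrative, not the dissertation's tuned settings):

```python
def update_avg(avg, queue_len, w_q=0.002):
    """Exponentially weighted moving average of the instantaneous queue
    length, as used by RED (w_q is the averaging weight)."""
    return (1 - w_q) * avg + w_q * queue_len

def red_drop_probability(avg, min_th, max_th, max_p):
    """Basic RED drop probability as a function of the average queue size."""
    if avg < min_th:
        return 0.0       # below min threshold: never drop
    if avg >= max_th:
        return 1.0       # above max threshold: drop every arriving packet
    # Linear ramp from 0 to max_p between the two thresholds.
    return max_p * (avg - min_th) / (max_th - min_th)

# Halfway between thresholds of 10 and 30 packets, with max_p = 0.1,
# an arriving packet is dropped with probability 0.05.
p = red_drop_probability(20, min_th=10, max_th=30, max_p=0.1)
```

Because TCP sources back off in response to these early random drops, a well-chosen `max_p` desynchronises the flows and stabilises their transmission rates, which is the effect studied in the simulations.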

Scheepers, C | MSc 2014 University of Pretoria | Co-evolution of Neuro-Controllers to Train Multi-Agent Teams from Zero Knowledge | After the historic chess match between Deep Blue and Garry Kasparov, many researchers considered the game of chess solved and moved on to the more complex game of soccer. Artificial intelligence research has shifted focus to creating artificial players capable of mimicking the task of playing soccer. A new training algorithm is presented in this thesis for training teams of players from zero knowledge, evaluated on a simplified version of the game of soccer. The new algorithm makes use of the charged particle swarm optimiser as a neural network trainer in a coevolutionary training environment. To counter the lack of domain information, a new relative fitness measure based on the FIFA league-ranking system was developed. This measure provides granular relative performance information for competitive training. Gameplay strategies that resulted from the trained players are evaluated. It was found that the algorithm successfully trains teams of agents to play in a cooperative manner. Techniques developed in this study may also be widely applied to various other artificial intelligence fields. |

Cleghorn, CW | MSc 2014 University of Pretoria | A Generalized Theoretical Deterministic Particle Swarm Model | Particle swarm optimization (PSO) is a well known population-based search algorithm, originally developed by Kennedy and Eberhart in 1995. The PSO has been utilized in a variety of application domains, providing a wealth of empirical evidence for its effectiveness as an optimizer. The PSO itself has undergone many alterations subsequent to its inception, some of which are fundamental to the PSO’s core behavior, while others have been more application specific. The fundamental alterations to the PSO have to a large extent been a result of theoretical analysis of a PSO particle’s long-term trajectory. The most obvious example is the need for velocity clamping in the original PSO. While there were empirical findings that suggested that each particle’s velocity was increasing at a rapid rate, it was only once a solid theoretical study was performed that the reason for the velocity explosion was understood. There has been a large amount of theoretical research done on the PSO, both for the deterministic model, and more recently for the stochastic model. This thesis presents an extension to the theoretical deterministic PSO model. Under the extended model, conditions for particle convergence to a point are derived. At present all theoretical PSO research is done under the stagnation assumption, in some form or another. The analysis done under the stagnation assumption is one where the personal best and neighborhood best are assumed to be non-changing. While analysis under the stagnation assumption is very informative, it could never provide a complete description of a PSO’s behavior. Furthermore, the assumption implicitly removes the notion of a social network structure from the analysis. The model used in this thesis greatly weakens the stagnation assumption, by instead assuming that each particle’s personal best and neighborhood best can occupy an arbitrarily large number of unique positions. Empirical results are presented to support the theoretical findings. |
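For context, the classical deterministic-model result that this thesis generalizes states that, under stagnation, a particle's trajectory converges to a point when |w| < 1 and 0 < c1 + c2 < 2(1 + w), where w is the inertia weight and c1, c2 the acceleration coefficients. A quick numerical check of that classical criterion (illustrative only; it is not the extended model of the thesis):

```python
def converges(w, c1, c2):
    """Classical deterministic-model convergence criterion
    (derived under the stagnation assumption)."""
    return abs(w) < 1 and 0 < c1 + c2 < 2 * (1 + w)

def simulate(w, c1, c2, p, g, steps=500):
    """Deterministic 1-D particle with fixed personal best p and
    neighbourhood best g (random factors replaced by 1).
    Returns the final position."""
    x, v = 0.0, 0.1
    for _ in range(steps):
        v = w * v + c1 * (p - x) + c2 * (g - x)
        x += v
    return x

# Inside the convergent region the trajectory settles on the
# weighted average (c1*p + c2*g) / (c1 + c2).
x_final = simulate(0.7, 1.4, 1.4, p=1.0, g=3.0)
```

Under the thesis's weakened assumption, where the bests may occupy arbitrarily many positions, the derived conditions reduce to criteria of this kind in the stagnation limit.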

Georgieva, KS | MSc 2015 University of Pretoria | A Computational Intelligence Approach to Clustering of Temporal Data | Temporal data is common in real-world datasets. Analysis of such data, for example by means of clustering algorithms, can be difficult due to its dynamic behaviour. There are various types of changes that may occur to clusters in a dataset. Firstly, data patterns can migrate between clusters, shrinking or expanding the clusters. Additionally, entire clusters may move around the search space. Lastly, clusters can split and merge. Data clustering, which is the process of grouping similar objects, is one approach to determine relationships among data patterns, but data clustering approaches can face limitations when applied to temporal data, such as difficulty tracking the moving clusters. This research aims to analyse the ability of particle swarm optimisation (PSO) and differential evolution (DE) algorithms to cluster temporal data. These algorithms experience two weaknesses when applied to temporal data. The first weakness is the loss of diversity, which refers to the fact that the population of the algorithm converges, becoming less diverse and, therefore, limiting the algorithm’s exploration capabilities. The second weakness, outdated memory, is only experienced by the PSO and refers to the previous personal best solutions found by the particles becoming obsolete as the environment changes. A data clustering algorithm that addresses these two weaknesses is necessary to cluster temporal data. This research describes various adaptations of PSO and DE algorithms for the purpose of clustering temporal data. The algorithms proposed aim to address the loss of diversity and outdated memory problems experienced by PSO and DE algorithms. These problems are addressed by combining approaches previously used for the purpose of dealing with temporal or dynamic data, such as repulsion and anti-convergence, with PSO and DE approaches used to cluster data. 
Six PSO algorithms are introduced in this research, namely the data clustering particle swarm optimisation (DCPSO), reinitialising data clustering particle swarm optimisation (RDCPSO), cooperative data clustering particle swarm optimisation (CDCPSO), multi-swarm data clustering particle swarm optimisation (MDCPSO), cooperative multi-swarm data clustering particle swarm optimisation (CMDCPSO), and elitist cooperative multi-swarm data clustering particle swarm optimisation (eCMDCPSO). Additionally, four DE algorithms are introduced, namely the data clustering differential evolution (DCDE), re-initialising data clustering differential evolution (RDCDE), dynamic data clustering differential evolution (DCDynDE), and cooperative dynamic data clustering differential evolution (CDCDynDE). The PSO and DE algorithms introduced require prior knowledge of the total number of clusters in the dataset. The total number of clusters in a real-world dataset, however, is not always known. For this reason, the best performing PSO and best performing DE are compared. The CDCDynDE is selected as the winning algorithm, which is then adapted to determine the optimal number of clusters dynamically. The resulting algorithm is the k-independent cooperative data clustering differential evolution (KCDCDynDE) algorithm, which was compared against the local network neighbourhood artificial immune system (LNNAIS) algorithm, which is an artificial immune system (AIS) designed to cluster temporal data and determine the total number of clusters dynamically. It was determined that the KCDCDynDE performed the clustering task well for problems with frequently changing data, high-dimensions, and pattern and cluster data migration types. |
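When a PSO particle or DE individual encodes a set of cluster centroids, a commonly used clustering fitness is the quantization error. The sketch below illustrates that measure; it is an assumption for illustration, not necessarily the exact fitness used in the thesis:

```python
import math

# Quantization-error fitness for a candidate encoding K centroids:
# the average, over non-empty clusters, of the mean distance between a
# centroid and the patterns assigned to it. A sketch; not necessarily
# the exact fitness used in the thesis.
def quantization_error(centroids, patterns):
    clusters = {i: [] for i in range(len(centroids))}
    for z in patterns:
        # assign each pattern to its closest centroid
        nearest = min(range(len(centroids)),
                      key=lambda j: math.dist(z, centroids[j]))
        clusters[nearest].append(z)
    means = [sum(math.dist(z, centroids[i]) for z in members) / len(members)
             for i, members in clusters.items() if members]
    return sum(means) / len(means)
```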

Stallmann, CF | MSc 2015 University of Pretoria | Digital Audio Restoration of Gramophone Records | Gramophones were the main audio recording medium for more than seven decades and regained widespread popularity over the past few years. Being an analogue storage medium, gramophone records are subject to distortions caused by scratches, dust particles, high temperatures, excessive playback and other noise induced by mishandling the record. Due to the early adoption of the compact disc and other digital audio mediums, most research to reduce the noise on gramophone records focused on physical improvements such as the enhancements of turntable hardware, amelioration of the record material or advances through better record cutting techniques. Comparatively little research has been conducted to digitally filter and reconstruct distorted gramophone recordings. This thesis provides an extensive analysis of the digital detection and reconstruction of noise in gramophone audio signals distorted by scratches. The ability to approximate audio signals was examined through an empirical analysis of different polynomials and time series models. The investigated polynomials include the standard, Fourier, Newton, Lagrange, Hermite, osculating and piecewise polynomials. Experiments were also conducted by applying autoregressive, moving average and heteroskedasticity models, such as the AR, MA, ARMA, ARIMA, ARCH and GARCH models. In addition, different variants of an artificial neural network were tested and compared to the time series models. Noise detection was performed using algorithms based on the standard score, median absolute deviation, Mahalanobis distance, nearest neighbour, mean absolute spectral deviation and the absolute predictive deviation method. The reconstruction process employed the examined polynomials and models and also considered adjacent window, mirroring window, nearest neighbour, similarity, Lanczos and cosine interpolation. 
The detection and reconstruction algorithms were benchmarked using a dataset of 800 songs from eight different genres. Simulations were conducted using artificially generated and real gramophone noise. The algorithms were compared according to their detection and reconstruction accuracy, the computational time needed and the tradeoff between the accuracy and time. Empirical analysis showed that the highest noise detection accuracy was achieved with the absolute predictive deviation using an ARIMA model. The predictive outlier detector employing a Jordan simple recurrent artificial neural network was most efficient by achieving the best detection rate in a limited timespan. It was also found that the artificial neural networks reconstructed the audio signals more accurately than the other interpolation algorithms. The AR model was most efficient by achieving the best tradeoff between the execution time and the interpolation error. |
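One of the detection methods listed above, the median absolute deviation, can be sketched as a simple outlier test on the samples; the 0.6745 consistency constant and the 3.5 threshold are conventional choices rather than values taken from the thesis:

```python
import statistics

# Median-absolute-deviation (MAD) detector for impulsive noise such as
# scratch clicks. The 0.6745 consistency constant and the 3.5 threshold
# are conventional choices, not values taken from the thesis.
def mad_outliers(samples, threshold=3.5):
    med = statistics.median(samples)
    mad = statistics.median([abs(s - med) for s in samples])
    if mad == 0:
        return []  # degenerate signal: no spread, nothing to flag
    return [i for i, s in enumerate(samples)
            if abs(0.6745 * (s - med) / mad) > threshold]
```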

Van Wyk, AB | MSc 2015 University of Pretoria | An Analysis of Overfitting in Particle Swarm Optimized Neural Networks | The phenomenon of overfitting, in which a feed-forward neural network (FFNN) overtrains on the training data at the cost of generalisation accuracy, is known to be specific to the training algorithm used. This study investigates overfitting within the context of particle swarm optimised (PSO) FFNNs. Two of the most widely used PSO algorithms are compared in terms of FFNN accuracy and a description of the overfitting behaviour is established. Each of the PSO components is in turn investigated to determine its effect on FFNN overfitting. A study of the maximum velocity (Vmax) parameter is performed and it is found that smaller Vmax values are optimal for FFNN training. The analysis is extended to the inertia and acceleration coefficient parameters, where it is shown that specific interactions among the parameters have a dominant effect on the resultant FFNN accuracy and may be used to reduce overfitting. Further, the significant effect of the swarm size on network accuracy is also shown, with a critical range being identified for the swarm size for effective training. The study is concluded with an investigation into the effect of the different activation functions. Given strong empirical evidence, a hypothesis is made stating that the gradient of the activation function significantly affects the convergence of the PSO. Lastly, the PSO is shown to be a very effective algorithm for the training of self-adaptive FFNNs, capable of learning from unscaled data. |
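The Vmax mechanism under study clamps each velocity component before the position update. A one-dimensional sketch of that step; the parameter values are common PSO defaults, assumed here for illustration:

```python
import random

# One PSO velocity-and-position update with Vmax clamping, the
# mechanism whose effect on overfitting is studied above. Parameter
# values are common defaults, assumed for illustration.
def update_particle(x, v, pbest, gbest, w=0.72, c1=1.49, c2=1.49, vmax=0.5):
    r1, r2 = random.random(), random.random()
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = max(-vmax, min(vmax, v))  # clamp each component to [-Vmax, Vmax]
    return x + v, v
```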

Langeveld, J | MSc 2015 University of Pretoria | Set-Based Particle Swarm Optimization | Particle swarm optimization (PSO) algorithms have been successfully applied to discrete-valued optimization problems. However, in many cases the algorithms have been tailored specifically for the problem at hand. This study proposes a generic set-based particle swarm optimization algorithm, called SBPSO, for use on discrete-valued optimization problems that can be formulated as set-based problems. The performance of the SBPSO is then evaluated on two different discrete optimization problems: the multidimensional knapsack problem (MKP) and the feature selection problem (FSP) from machine learning. In both cases, the SBPSO is compared to three other discrete PSO algorithms from the literature. On the MKP, the SBPSO is shown to outperform, with statistical significance, the other algorithms. On the FSP and using a k-nearest neighbor classifier, the SBPSO is shown to outperform, with statistical significance, the other algorithms. When a Gaussian Naive Bayes or a J48 decision tree classifier is used, no algorithm could be shown to outperform the others on the FSP. |
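A set-based representation makes the MKP fitness straightforward: sum the profits of the selected items, subject to every resource constraint. A sketch under stated assumptions; scoring infeasible sets as zero is one common choice, and the item data is made up for illustration:

```python
# Multidimensional knapsack fitness for a set-encoded candidate, as in
# SBPSO's set representation. Scoring infeasible sets as zero is an
# assumption for illustration, not necessarily the thesis's handling.
def mkp_value(items, profits, weights, capacities):
    for r, cap in enumerate(capacities):
        # one weight per item per resource; violating any capacity
        # makes the candidate infeasible
        if sum(weights[i][r] for i in items) > cap:
            return 0
    return sum(profits[i] for i in items)
```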

Klazar, R | MSc 2016 University of Pretoria | Ant-Inspired Strategies for Opportunistic Load Balancing in the Distributed Computation of Solutions to Embarrassingly Parallel Problems | Computational science is a practice that requires a large amount of computing time. One means of providing the required computing time is to construct a distributed computing system that utilises the ordinary desktop computers found within an organisation. However, when the constituent computers do not all perform computations at the same speed, the overall completion time of a project involving the execution of tasks by all of the computers in the system becomes dependent on the performance of the slowest computer in the network. This study proposes two ant-inspired algorithms for dynamic task allocation that aim to overcome the aforementioned dependency. A procedure for tuning the free parameters of the algorithms is specified and the algorithms are evaluated for their viability in terms of their effect on the overall completion time of tasks as well as their usage of bandwidth in the network. |

Naidoo, T | MSc 2017 University of Pretoria | Reconstruction and Analysis of Holographic Images for Lens-less Microscopy | The digital in-line holographic configuration is motivated by the goal of developing a portable, cost-effective sensor system for pre-screening patient blood samples. The theory of holography is explained from the foundational concepts in scalar diffraction theory all the way through to the implementation of reconstruction algorithms. Methods for the enhancement of holographic reconstructions are described. The algorithms that perform an automated count of the reconstructed objects are described and demonstrated. Simulated and experimental results are provided. Together, the lens-free holographic microscopy of micro-sized particles, along with the application of image processing techniques for the automated detection and counting of objects of interest, provides a component towards realising a sensor system that can be used for pre-screening patient blood samples. |

Van Heerden, WS | MSc 2017 University of Pretoria | Self-Organizing Feature Maps for Exploratory Data Analysis and Data Mining: A Practical Perspective | The self-organizing feature map (SOM) is an unsupervised machine learning approach that offers extremely useful data modeling and visualization abilities. The approach has been successfully employed in order to solve a wide variety of problems. Exploratory data analysis (EDA) and data mining (DM) are both fields that attempt to algorithmically extract insightful knowledge from a data set, with a greater or lesser level of human assistance. This work overviews the SOM algorithm, with a focus on EDA and DM applications. Supervised and unsupervised neuron labeling methods for SOMs, which are an integral part of most SOM-based EDA and DM exercises, are investigated. Existing SOM-based EDA and DM approaches are also critically compared. A novel unsupervised neuron labeling approach, namely unsupervised weight-based cluster labeling, is introduced. This research also proposes HybridSOM, a novel hybrid framework for DM-oriented rule extraction, which combines a SOM with an arbitrary rule extraction algorithm. Two empirical investigations are reported. Firstly, existing supervised neuron labeling techniques are experimentally compared. Secondly, the HybridSOM framework is empirically compared to several existing DM rule extractors. |
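The core SOM training step, finding the best-matching unit and pulling its neighbourhood towards the input, can be sketched on a one-dimensional map grid. This is a minimal sketch of the classic algorithm, not the HybridSOM framework itself:

```python
import math

# One epoch of classic SOM training on a one-dimensional map grid:
# find the best-matching unit (BMU), then pull each neuron towards the
# input with a Gaussian neighbourhood. A minimal sketch only.
def som_epoch(weights, data, lr=0.1, radius=1.0):
    for x in data:
        bmu = min(range(len(weights)),
                  key=lambda k: math.dist(x, weights[k]))
        for k in range(len(weights)):
            # influence decays with distance from the BMU on the grid
            h = math.exp(-((k - bmu) ** 2) / (2 * radius ** 2))
            weights[k] = [wi + lr * h * (xi - wi)
                          for wi, xi in zip(weights[k], x)]
    return weights
```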

Leonard, BJ | MSc 2017 University of Pretoria | Critical Analysis of Angle Modulated Particle Swarm Optimisers | This dissertation presents an analysis of the angle modulated particle swarm optimisation (AMPSO) algorithm. AMPSO is a technique that enables one to solve binary optimisation problems with particle swarm optimisation (PSO), without any modifications to the PSO algorithm. While AMPSO has been successfully applied to a range of optimisation problems, there is little to no understanding of how and why the algorithm might fail. The work presented here includes in-depth theoretical and empirical analyses of the AMPSO algorithm in an attempt to understand it better. Where problems are identified, they are supported by theoretical and/or empirical evidence. Furthermore, suggestions are made as to how the identified issues could be overcome. In particular, the generating function is identified as the main cause for concern. The generating function in AMPSO is responsible for generating binary solutions. However, it is shown that the increasing frequency of the generating function hinders the algorithm’s ability to effectively exploit the search space. The problem is addressed by introducing methods to construct different generating functions, and to quantify the quality of arbitrary generating functions. In addition to this, a number of other problems are identified and addressed in various ways. The work concludes with an empirical analysis that aims to identify which of the various suggestions made throughout this dissertation hold substantial promise for further research. |
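The generating function in question is the standard angle-modulation formula g(x) = sin(2π(x − a) · b · cos(2π(x − a) · c)) + d: a 4-tuple (a, b, c, d) is optimised in continuous space, and the sign of g at regularly spaced sample points yields the bit string. A sketch; unit-interval sampling is an assumption here:

```python
import math

# Angle-modulation generating function: a 4-tuple (a, b, c, d) evolved
# in continuous space is sampled at regular intervals, and the sign of
# g(x) at each sample point yields one bit of the binary solution.
def generate_bits(a, b, c, d, n_bits):
    bits = []
    for j in range(n_bits):
        x = float(j)  # unit-interval sampling is an assumption here
        g = math.sin(2 * math.pi * (x - a) * b
                     * math.cos(2 * math.pi * (x - a) * c)) + d
        bits.append(1 if g > 0 else 0)
    return bits
```

The vertical shift d illustrates one way a single parameter can bias the whole bit string, a sensitivity related to the exploitation problems the dissertation identifies.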

Cian Steenkamp | MSc 2021 Stellenbosch University | Multi-guided Particle Swarm Optimization for Many-objective Optimization Problems | My research proposes to investigate the scalability of the multi-guided particle swarm optimization (MGPSO) algorithm; that is, scalability only with regard to the number of objectives. Many-objective optimization problems (MaOPs) have four or more objectives to be optimized, whereas multi-objective optimization problems (MOPs) have at most three objectives. The MGPSO has been shown to be suitable for effectively solving MOPs, but no research has been done on the scalability of the MGPSO to solve MaOPs. Therefore, my research will initially compare the performance of the MGPSO with that of other many-objective optimization (MaOO) algorithms on MaOPs consisting of 3, 5, 8, 10, and 15 objectives. Then mechanisms will be proposed and implemented to allow the MGPSO to scale and efficiently solve MaOPs. The benchmark problems used in my study include test functions from the Walking Fish Group (WFG) and Deb-Thiele-Laumanns-Zitzler (DTLZ) test suites. The empirical results will be compared by applying performance measures and statistical tests. |

## Doctoral Students¶

Student Information | Degree | Thesis Title | Abstract |
---|---|---|---|

Van den Bergh, F | PhD 2002 University of Pretoria | An Analysis of Particle Swarm Optimizers | Many scientific, engineering and economic problems involve the optimisation of a set of parameters. These problems include examples like minimising the losses in a power grid by finding the optimal configuration of the components, or training a neural network to recognise images of people’s faces. Numerous optimisation algorithms have been proposed to solve these problems, with varying degrees of success. The Particle Swarm Optimiser (PSO) is a relatively new technique that has been empirically shown to perform well on many of these optimisation problems. This thesis presents a theoretical model that can be used to describe the long-term behaviour of the algorithm. An enhanced version of the Particle Swarm Optimiser is constructed and shown to have guaranteed convergence on local minima. This algorithm is extended further, resulting in an algorithm with guaranteed convergence on global minima. A model for constructing cooperative PSO algorithms is developed, resulting in the introduction of two new PSO-based algorithms. Empirical results are presented to support the theoretical properties predicted by the various models, using synthetic benchmark functions to investigate specific properties. The various PSO-based algorithms are then applied to the task of training neural networks, corroborating the results obtained on the synthetic benchmark functions. |

Omran, M | PhD 2005 University of Pretoria | Particle Swarm Optimization Methods for Pattern Recognition and Image Processing | Pattern recognition has as its objective to classify objects into different categories and classes. It is a fundamental component of artificial intelligence and computer vision. This thesis investigates the application of an efficient optimization method, known as Particle Swarm Optimization (PSO), to the field of pattern recognition and image processing. First a clustering method that is based on PSO is proposed. The application of the proposed clustering algorithm to the problem of unsupervised classification and segmentation of images is investigated. A new automatic image generation tool tailored specifically for the verification and comparison of various unsupervised image classification algorithms is then developed. A dynamic clustering algorithm which automatically determines the "optimum" number of clusters and simultaneously clusters the data set with minimal user interference is then developed. Finally, PSO-based approaches are proposed to tackle the color image quantization and spectral unmixing problems. In all the proposed approaches, the influence of PSO parameters on the performance of the proposed algorithms is evaluated. |

Rodic, D | PhD 2005 University of Pretoria | Intelligent Distributed Agent Based Architecture | This thesis presents work done on the development of a multi-agent system architecture that facilitates coordination and a novel social networks based approach to coordination. The field of multi-agent system research is undergoing tremendous expansion and it would be impossible to address all the issues related to the field. Instead, this thesis focuses on the coordination aspect of multi-agent systems. The architecture presented here is named the INtelligent Distributed Agent Based Architecture, INDABA. INDABA, as a hybrid agent architecture, combines the subsymbolic knowledge representation layered architecture with a symbolic layer that allows for deliberative reasoning and learning. INDABA also introduces a layer that facilitates coordination in a society of agents, namely the interaction layer. The new approach to coordination was inspired by social networks, as observed in higher mammalian societies. Two social relationships were explored, namely kinship and trust. Coordination is achieved through team selection. Using characteristics of social networks, such as learning and the ability to deal with uncertainties, the best team is selected for task execution. The experiments conducted for the purpose of this thesis were done on three levels. Firstly, an abstract simulated environment was created where a society of a large number of agents could be observed. Secondly, experiments were done in a more realistic simulated robot environment. The last set of experiments was done in a real-world environment, with the implementation of INDABA in embodied mobile agents (robots). The experiments have confirmed the applicability of INDABA as an agent architecture, as well as the validity of the social networks coordination approach. |

Khan, S | PhD 2009 University of Pretoria | Design and Analysis of Iterative Heuristics for Topology Design of Distributed Local Area Networks | Topology design of distributed local area networks (DLANs) can be classified as an NP-hard problem. Intelligent algorithms, such as evolutionary and swarm intelligence techniques, are candidate approaches to address this problem and to produce desirable solutions. DLAN topology design consists of several conflicting objectives such as minimization of cost, minimization of network delay, minimization of the number of hops between two nodes, and maximization of reliability. It is possible to combine these objectives in a single-objective function, provided that the trade-offs among these objectives are adhered to. This thesis proposes a strategy and a new aggregation operator based on fuzzy logic to combine the four objectives in a single-objective function. The thesis also investigates the use of a number of evolutionary algorithms such as stochastic evolution, simulated evolution, and simulated annealing. A number of hybrid variants of the above algorithms are also proposed. Furthermore, the applicability of swarm intelligence techniques such as ant colony optimization and particle swarm optimization to topology design has been investigated. All proposed techniques have been evaluated empirically with respect to their algorithm parameters. Results suggest that simulated annealing produced the best results among all proposed algorithms. In addition, the hybrid variants of simulated annealing, simulated evolution, and stochastic evolution generated better results than their respective basic algorithms. Moreover, a comparison of ant colony optimization and particle swarm optimization shows that the latter generated better results than the former. |

Graaff, A | PhD 2011 University of Pretoria | A Local Network Neighbourhood Artificial Immune System | As information is becoming more available online and will forevermore be part of any business, the true value of the large amounts of stored data is in the discovery of hidden and unknown relations and connections or traits in the data. The acquisition of these hidden relations can influence strategic decisions which have an impact on the success of a business. Data clustering is one of many methods to partition data into different groups in such a way that data patterns within the same group share some common trait compared to patterns across different groups. This thesis proposes a new artificial immune model for the problem of data clustering. The new model is inspired by the network theory of immunology and differs from its network based predecessor models in its formation of artificial lymphocyte networks. The proposed model is first applied to data clustering problems in stationary environments. Two different techniques are then proposed which enhances the proposed artificial immune model to dynamically determine the number of clusters in a data set with minimal to no user interference. A technique to generate synthetic data sets for data clustering of non-stationary environments is then proposed. Lastly, the original proposed artificial immune model and the enhanced version to dynamically determine the number of clusters are then applied to generated synthetic non-stationary data clustering problems. The influence of the parameters on the clustering performance is investigated for all versions of the proposed artificial immune model and supported by empirical results and statistical hypothesis tests. |

Schoeman, IL | PhD 2011 University of Pretoria | Niching in Particle Swarm Optimization | Optimization forms an intrinsic part of the design and implementation of modern systems, such as industrial systems, communication networks, and the configuration of electric or electronic components. Population-based single-solution optimization algorithms such as particle swarm optimization (PSO) have been shown to perform well when a number of optimal or suboptimal solutions exist. However, some problems require algorithms that locate all or most of these optimal and suboptimal solutions. Such algorithms are known as niching or speciation algorithms. Several techniques have been proposed to extend the PSO paradigm so that multiple optima can be located and maintained within a convoluted search space. A significant number of these implementations are subswarm-based, that is, portions of the swarm are optimized separately. Niches are formed to contain these subswarms, a process that often requires user-specified parameters, as the sizes and placing of the niches are unknown. This thesis presents a new niching technique that uses the vector dot product of the social and cognitive direction vectors to determine niche boundaries. Thus, a separate niche radius is calculated for each niche, a process that requires minimal knowledge of the search space. This strategy differs from other techniques using niche radii, where a niche radius is either required to be set in advance, or calculated from the distances between particles. The development of the algorithm is traced and tested extensively using synthetic benchmark functions. Empirical results are reported using a variety of settings. An analysis of the algorithm is presented as well as a scalability study. The performance of the algorithm is also compared to that of two other well-known PSO niching algorithms. To conclude, the vector-based PSO is extended to locate and track multiple optima in dynamic environments. |
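The niche-boundary test can be sketched as a sign check on the dot product of the cognitive direction (towards the personal best) and the social direction (towards the neighbourhood best); reading a negative product as a pull towards a different optimum is this sketch's interpretation, not the thesis implementation:

```python
# Dot product of a particle's cognitive direction (towards its personal
# best) and social direction (towards the neighbourhood best). In this
# illustrative sketch, a negative product is read as the particle being
# pulled towards a different optimum than its neighbourhood best.
def direction_dot(x, pbest, nbest):
    cognitive = [p - xi for p, xi in zip(pbest, x)]
    social = [n - xi for n, xi in zip(nbest, x)]
    return sum(ci * si for ci, si in zip(cognitive, social))
```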

Lutu, PEN | PhD 2011 University of Pretoria | Dataset Selection for Aggregate Model Implementation in Predictive Data Mining | This thesis addresses the problem of dataset selection for predictive data mining. Dataset selection was studied in the context of aggregate modeling for classification. The central argument of this thesis is that, for predictive data mining, it is possible to systematically select many dataset samples and employ different approaches (different from current practice) to feature selection, training dataset selection, and model construction. When a large amount of information in a large dataset is utilised in the modeling process, the resulting models will have a high level of predictive performance and should be more reliable. Aggregate classification models, also known as ensemble classifiers, have been shown to provide a high level of predictive accuracy on small datasets. Such models are known to achieve a reduction in the bias and variance components of the prediction error of a model. The research for this thesis was aimed at the design of aggregate models and the selection of training datasets from large amounts of available data. The objectives for the model design and dataset selection were to reduce the bias and variance components of the prediction error for the aggregate models. Large datasets obtained from the UCI KDD Archive were used in the experiments. Two classification algorithms: See5 for classification tree modeling and K-Nearest Neighbour, were used in the experiments. The two methods of aggregate modeling that were studied are One-Vs-All (OVA) and positive-Vs-negative (pVn) modeling. While OVA is an existing method that has been used for small datasets, pVn is a new method of aggregate modeling, proposed in this thesis. Methods for feature selection from large datasets, and methods for training dataset selection from large datasets, for OVA and pVn aggregate modeling, were studied. 
The feature selection experiments revealed that the use of many samples, robust measures of correlation, and validation procedures result in the reliable selection of relevant features for classification. A new algorithm for feature subset search, based on the decision rule-based approach to heuristic search, was designed and the performance of this algorithm was compared to two existing algorithms for feature subset search. The experimental results revealed that the new algorithm makes better decisions for feature subset search. The information provided by a confusion matrix was used as a basis for the design of OVA and pVn base models which are combined into one aggregate model. A new construct called a confusion graph was used in conjunction with new algorithms for the design of pVn base models. A new algorithm for combining base model predictions and resolving conflicting predictions was designed and implemented. Experiments to study the performance of the OVA and pVn aggregate models revealed that the aggregate models provide a high level of predictive accuracy compared to single models. Finally, theoretical models to depict the relationships between the factors that influence feature selection and training dataset selection for aggregate models are proposed, based on the experimental results. |

Constantinou, D | PhD 2011 University of Pretoria | Ant Colony Optimisation Algorithms for Solving Multi-Objective Power-Aware Metrics for Mobile Ad Hoc Networks | A mobile ad hoc network (MANET) is an infrastructure-less multi-hop network where each node communicates with other nodes directly or indirectly through intermediate nodes. Thus, all nodes in a MANET basically function as mobile routers participating in some routing protocol required for deciding and maintaining the routes. Since MANETs are infrastructure-less, self-organizing, rapidly deployable wireless networks, they are highly suitable for applications such as military tactical operations, search and rescue missions, disaster relief operations, and target tracking. Building such ad-hoc networks poses a significant technical challenge because of energy constraints and specifically in relation to the application of wireless network protocols. As a result of its highly dynamic and distributed nature, the routing layer within the wireless network protocol stack presents one of the key technical challenges in MANETs. In particular, energy-efficient routing may be the most important design criterion for MANETs, since mobile nodes are powered by batteries with limited capacity and variable recharge frequency, according to application demand. In order to conserve power, it is essential that a routing protocol be designed to guarantee data delivery even should most of the nodes be asleep and not forwarding packets to other nodes. Load distribution constitutes another important approach to the optimisation of active communication energy. Load distribution enables the maximisation of the network lifetime by facilitating the avoidance of over-utilised nodes when a route is in the process of being selected. Routing algorithms for mobile networks that attempt to optimise routes while retaining a small message overhead and maximising the network lifetime have been put forward. 
However, certain of these routing protocols have proved to have a negative impact on node and network lives by inadvertently over-utilising the energy resources of a small set of nodes in favour of others. The conservation of power and careful sharing of the cost of routing packets would ensure an increase in both node and network lifetimes. This thesis proposes, by using an ant colony optimisation (ACO) approach, to simultaneously optimise five power-aware metrics that result in energy-efficient routes and to maximise the MANET’s lifetime while taking into consideration a realistic mobility model. By using ACO algorithms, a set of optimal solutions – the Pareto-optimal set – is found. This thesis proposes five algorithms to solve the multi-objective problem in the routing domain. The first two algorithms, namely, the energy efficiency for a mobile network using a multi-objective, ant colony optimisation, multi-pheromone (EEMACOMP) algorithm and the energy efficiency for a mobile network using a multi-objective, ant colony optimisation, multi-heuristic (EEMACOMH) algorithm, are both adaptations of multi-objective, ant colony optimisation algorithms (MOACO) which are based on the ant colony system (ACS) algorithm. The new algorithms are constructive, which means that in every iteration, every ant builds a complete solution. In order to guide the transition from one state to another, the algorithms use pheromone and heuristic information. The last algorithm implemented, namely, the energy efficiency for a mobile network using a multi-objective, ant colony optimisation, multi-colony (EEMACOMC) algorithm, uses a multiple colony ACO algorithm. |
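The pheromone-plus-heuristic state transition common to ACS-based colonies can be sketched as a probability rule over candidate next hops; the α and β weights below are illustrative assumptions, not the thesis's settings:

```python
# Pheromone-plus-heuristic transition rule used by ACS-style colonies:
# the probability of an ant moving from node i to node j weighs the
# pheromone level against heuristic desirability. The alpha and beta
# weights are illustrative assumptions.
def transition_probs(i, candidates, pheromone, heuristic, alpha=1.0, beta=2.0):
    weights = {j: (pheromone[i][j] ** alpha) * (heuristic[i][j] ** beta)
               for j in candidates}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}
```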

Du Plessis, MC | PhD 2012 University of Pretoria | Adaptive Multi-Population Differential Evolution for Dynamic Environments | Dynamic optimisation problems are problems where the search space does not remain constant over time. Evolutionary algorithms aimed at static optimisation problems often fail to effectively optimise dynamic problems. The main reason for this is that the algorithms converge to a single optimum in the search space, and then lack the necessary diversity to locate new optima once the environment changes. Many approaches to adapting traditional evolutionary algorithms to dynamic environments are available in the literature, but differential evolution (DE) has been investigated as a base algorithm by only a few researchers. This thesis reports on adaptations of existing DE-based optimisation algorithms for dynamic environments. A novel approach, which evolves DE sub-populations based on performance in order to discover optima in a dynamic environment earlier, is proposed. It is shown that this approach reduces the average error in a wide range of benchmark instances. A second approach, which is shown to improve the location of individual optima in the search space, is combined with the first approach to form a new DE-based algorithm for dynamic optimisation problems. The algorithm is further adapted to dynamically spawn and remove sub-populations, which is shown to be an effective strategy on benchmark problems where the number of optima is unknown or fluctuates over time. Finally, approaches to self-adapting DE control parameters are incorporated into the newly created algorithms. Experimental evidence is presented to show that, apart from reducing the number of parameters to fine-tune, a benefit in terms of lower error values is found when employing self-adaptive control parameters. |
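
The adaptations above build on standard differential evolution. As a reminder of the base algorithm being adapted, here is a minimal DE/rand/1/bin step (an illustrative sketch with typical default parameter values, not the settings or self-adaptive variants developed in the thesis):

```python
import random

def de_rand_1_bin(population, i, fitness, F=0.5, CR=0.9):
    """One DE/rand/1/bin step for target vector i (minimisation).

    A mutant is built from three distinct random individuals, binomially
    crossed with the target, and kept only if it does not worsen fitness.
    """
    n = len(population[i])
    r1, r2, r3 = random.sample(
        [j for j in range(len(population)) if j != i], 3)
    mutant = [population[r1][d] + F * (population[r2][d] - population[r3][d])
              for d in range(n)]
    j_rand = random.randrange(n)        # guarantee at least one mutant gene
    trial = [mutant[d] if (random.random() < CR or d == j_rand)
             else population[i][d] for d in range(n)]
    return trial if fitness(trial) <= fitness(population[i]) else population[i]
```

A multi-population scheme for dynamic environments would run several such populations in parallel and reallocate resources between them based on performance.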

Helbig, M | PhD 2012 University of Pretoria | Solving Dynamic Multi-Objective Optimisation Problems using Vector Evaluated Particle Swarm Optimisation | Most optimisation problems in everyday life are not static in nature and have multiple objectives, of which at least two are in conflict with one another. However, most research focusses on either static multi-objective optimisation (MOO) or dynamic single-objective optimisation (DSOO). Furthermore, most research on dynamic multi-objective optimisation (DMOO) focusses on evolutionary algorithms (EAs), and only a few particle swarm optimisation (PSO) algorithms exist. This thesis proposes a multi-swarm PSO algorithm, dynamic Vector Evaluated Particle Swarm Optimisation (DVEPSO), to solve dynamic multi-objective optimisation problems (DMOOPs). In order to determine whether an algorithm solves DMOOPs efficiently, functions are required that resemble real-world DMOOPs, called benchmark functions, as well as functions that quantify the performance of the algorithm, called performance measures. However, one major problem in the field of DMOO is a lack of standard benchmark functions and performance measures. To address this problem, an overview is provided from the current literature, and shortcomings of current DMOO benchmark functions and performance measures are discussed. In addition, new DMOOPs are introduced to address the identified shortcomings of current benchmark functions. The optimisation process of DVEPSO is directed by guides; therefore, various guide update approaches are investigated. Furthermore, a sensitivity analysis of DVEPSO is conducted to determine the influence of various parameters on the performance of DVEPSO. 
The investigated parameters include approaches to manage boundary constraint violations, approaches to share knowledge between the sub-swarms and responses to changes in the environment that are applied to either the particles of the sub-swarms or the non-dominated solutions stored in the archive. From these experiments the best DVEPSO configuration is determined and compared against four state-of-the-art DMOO algorithms. |
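
In vector evaluated PSO algorithms such as DVEPSO, each sub-swarm optimises one objective and the social attractor in a particle's velocity update comes from another sub-swarm's guide, so knowledge of a different objective steers the search. A minimal sketch of that update (illustrative only; the specific guide update and knowledge sharing approaches are the subject of the thesis, and the constants shown are common PSO defaults):

```python
import random

def vepso_velocity(v, x, pbest, other_swarm_guide,
                   w=0.729, c1=1.49, c2=1.49):
    """One velocity update where the social component is attracted to a
    guide taken from a different sub-swarm, rather than this swarm's own
    global best."""
    return [w * v[d]
            + c1 * random.random() * (pbest[d] - x[d])
            + c2 * random.random() * (other_swarm_guide[d] - x[d])
            for d in range(len(x))]
```

When the particle sits on both its personal best and the shared guide, only the inertia term remains, which is what the deterministic check below relies on.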

Malan, KM | PhD 2014 University of Pretoria | Characterising Continuous Optimisation Problems for Particle Swarm Optimisation Performance Prediction | Real-world optimisation problems are often very complex. Population-based metaheuristics, such as evolutionary algorithms and particle swarm optimisation (PSO) algorithms, have been successful in solving many of these problems, but it is well known that they sometimes fail. Over the last few decades the focus of research in the field has been largely on the algorithmic side with relatively little attention being paid to the study of the problems. Questions such as ‘Which algorithm will most accurately solve my problem?’ or ‘Which algorithm will most quickly produce a reasonable answer to my problem?’ remain unanswered. This thesis contributes to the understanding of optimisation problems and what makes them hard for algorithms, in particular PSO algorithms. Fitness landscape analysis techniques are developed to characterise continuous optimisation problems, and it is shown that this characterisation can be used to predict PSO failure. An essential feature of this approach is that multiple problem characteristics are analysed together, moving away from the idea of a single measure of problem hardness. The resulting prediction models not only lead to a better understanding of the algorithms themselves, but also take the field a step closer towards the goal of informed decision-making where the most appropriate algorithm is chosen to solve any new complex problem. |
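
To give a flavour of what a fitness landscape analysis technique computes, here is one classic measure from the wider literature, fitness distance correlation, estimated from a random sample of points (a generic sketch for illustration; the thesis develops its own set of characterisation measures, which are not reproduced here):

```python
import math

def fitness_distance_correlation(sample, fitness, optimum):
    """Correlate fitness values with Euclidean distance to a known optimum
    over a sample of points; for minimisation, values near 1 suggest an
    easy, globally unimodal landscape."""
    f = [fitness(x) for x in sample]
    d = [math.dist(x, optimum) for x in sample]
    mf, md = sum(f) / len(f), sum(d) / len(d)
    cov = sum((fi - mf) * (di - md) for fi, di in zip(f, d)) / len(f)
    sf = math.sqrt(sum((fi - mf) ** 2 for fi in f) / len(f))
    sd = math.sqrt(sum((di - md) ** 2 for di in d) / len(d))
    return cov / (sf * sd)
```

Combining several such features into one model, rather than relying on any single measure, is the key idea the abstract describes.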

Grobler, J | PhD(Eng) 2015 University of Pretoria | The Heterogeneous Meta-hyper-heuristic: From Low Level Heuristics to Low Level Meta-heuristics | Meta-heuristics have already been used extensively for the successful solution of a wide range of real world problems. A few industrial engineering examples include inventory optimization, production scheduling, and vehicle routing, all areas which have a significant impact on the economic success of society. Unfortunately, it is not always easy to predict which meta-heuristic from the multitude of algorithms available will be best suited to address a specific problem. Furthermore, many algorithm development options exist with regard to operator selection and parameter setting. Within this context, the idea of working towards a higher level of automation in algorithm design was born. Hyper-heuristics promote the design of more generally applicable search methodologies and tend to focus on performing relatively well on a large set of different types of problems. This thesis develops a heterogeneous meta-hyper-heuristic algorithm (HMHH) for single-objective unconstrained continuous optimization problems. The algorithm development process focused on investigating the use of meta-heuristics as low level heuristics in a hyper-heuristic context. This strategy is in stark contrast to the problem-specific low level heuristics traditionally employed in a hyper-heuristic framework. Alternative low level meta-heuristics, entity-to-algorithm allocation strategies, and strategies for incorporating local search into the HMHH algorithm were evaluated to obtain an algorithm which performs well against both its constituent low level meta-heuristics and four state-of-the-art multi-method algorithms. Finally, the impact of diversity management on the HMHH algorithm was investigated. 
Hyper-heuristics lend themselves to two types of diversity management, namely solution space diversity (SSD) management and heuristic space diversity (HSD) management. The concept of heuristic space diversity was introduced and a quantitative metric was defined to measure heuristic space diversity. An empirical evaluation of various solution space diversity and heuristic space diversity intervention mechanisms showed that the systematic control of heuristic space diversity has a significant impact on hyper-heuristic performance. |
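
An entity-to-algorithm allocation strategy of the kind mentioned above can be sketched as a simple rank-biased roulette wheel that assigns candidate solutions to low-level meta-heuristics (a hypothetical illustration of the general idea; this is not the HMHH's actual allocation strategy, and the rank-to-weight mapping is an assumption):

```python
import random

def allocate_entities(num_entities, heuristic_ranks):
    """Roulette-wheel allocation of entities to low-level meta-heuristics:
    heuristic_ranks[h] is the performance rank of heuristic h (1 = best),
    and better-ranked heuristics receive proportionally more entities."""
    weights = [1.0 / r for r in heuristic_ranks]   # rank 1 -> largest weight
    total = sum(weights)
    allocation = []
    for _ in range(num_entities):
        r, acc = random.uniform(0, total), 0.0
        chosen = len(weights) - 1                  # guards float rounding
        for h, wt in enumerate(weights):
            acc += wt
            if r <= acc:
                chosen = h
                break
        allocation.append(chosen)
    return allocation
```

Controlling how sharply the weights favour the best heuristic is one way to steer heuristic space diversity: flatter weights keep more heuristics active, while sharper weights concentrate entities on the current leader.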

Cleghorn, CW | PhD 2017 University of Pretoria | Particle Swarm Optimization: Empirical and Theoretical Stability Analysis | Particle swarm optimization (PSO) is a well-known stochastic population-based search algorithm, originally developed by Kennedy and Eberhart in 1995. Given PSO’s success at solving numerous real world problems, a large number of PSO variants have been proposed. However, unlike the original PSO, most variants currently have little to no existing theoretical results. This lack of a theoretical underpinning makes it difficult, if not impossible, for practitioners to make informed decisions about the algorithmic setup. This thesis focuses on the criteria needed for particle stability, or, as it is often referred to, particle convergence. While new PSO variants are proposed at a rapid rate, the theoretical analysis often takes substantially longer to emerge, if at all. In some situations the theoretical analysis is not performed, as the mathematical models needed to actually represent the PSO variants become too complex or contain intractable subproblems. It is for this reason that a rapid means of determining approximate stability criteria that does not require complex mathematical modeling is needed. This thesis presents an empirical approach for determining the stability criteria for PSO variants. This approach is designed to provide a real world depiction of particle stability by imposing absolutely no simplifying assumptions on the underlying PSO variant being investigated. This approach is utilized to identify a number of previously unknown stability criteria. This thesis also contains novel theoretical derivations of the stability criteria for both the fully informed PSO and the unified PSO. The theoretical models are then empirically validated utilizing the aforementioned empirical approach in an assumption free context. The thesis closes with a substantial theoretical extension of current PSO stability research. 
It is common practice within the existing theoretical PSO research to assume that, in the simplest case, the personal and neighborhood best positions are stagnant. However, in this thesis, stability criteria are derived under a mathematical model whereby the personal best and neighborhood best positions are treated as convergent sequences of random variables. It is also proved that, in order to derive stability criteria, no weaker assumption on the behavior of the personal and neighborhood best positions can be made. The theoretical extension presented caters for a large range of PSO variants. |
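
The flavour of an empirical stability check can be conveyed by simulating the particle update and testing whether positions remain bounded. The deliberately simplified sketch below uses stagnant best positions at the origin, which is precisely the classical assumption the thesis removes; the thesis's actual approach imposes no such simplification, and the step count and divergence threshold here are arbitrary illustrative choices:

```python
import random

def is_stable(w, c1, c2, steps=5000, threshold=1e10):
    """Simulate a 1-D particle around stagnant personal/neighbourhood best
    positions at the origin and report whether its position stays bounded."""
    x, v = random.uniform(-1, 1), random.uniform(-1, 1)
    for _ in range(steps):
        v = (w * v
             + c1 * random.random() * (0.0 - x)
             + c2 * random.random() * (0.0 - x))
        x = x + v
        if abs(x) > threshold:        # position has blown up: unstable
            return False
    return True
```

Sweeping such a check over a grid of (w, c1, c2) values traces out an approximate stability region for a variant without any mathematical modelling, which is the appeal of the empirical approach the abstract describes.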

Pampara, G | PhD (Industrial Engineering) 2021 Stellenbosch University | Particle Swarm Optimisation for Dynamically Constrained, Dynamic Optimisation Problems | Real-world problems are usually time-dependent, changing as time progresses. Due to this dynamic nature, it is important to be able to optimise problems that change over time. The natural extension is problems that not only change over time, but whose solutions may or may not remain valid when other factors of the optimisation problem are considered. This class of dynamic, constrained optimisation problems can be related back to the real world through the example of the stock market. Stocks are bought and sold on the stock market every day and at far greater velocities than a few decades ago. When a prospector attempts to acquire some stocks, a number of constraints are immediately present. Firstly, the prospector may only have a fixed budget from which to buy stocks. Secondly, the stock market is a dynamic environment with the price of stocks fluctuating frequently. Lastly, constraints exist for the stock purchasing on the stock market itself, such as a limited supply of stocks. When the prospector attempts to purchase stocks, a balance must be struck between the volatility of the dynamic environment (influencing the number of stocks the prospector can afford to purchase) and the actual availability of stocks (which constrains the number of stocks that may be purchased). Dynamic constrained optimisation problems define the combination of problem space changes and possible constraint changes. |