Through a successful bid for University pump-priming funds, the Robotics Lab developed RoboSpartan, an extensive new infrastructure that permits an increased understanding and optimization of the behaviours of robotic systems, and aids targeted experimentation in hardware. This project transfers expertise developed at the University in analyzing biological simulations, which led to the development of the spartan package of statistical techniques. The platform will possess the capability to:
automate parameter value sampling and result analysis for uncertainty and sensitivity analyses
generate simulation configuration files, replacing the current parameter values with the sampled values
generate grid engine scripts that aid efficient execution of each generated sample
apply machine learning approaches to develop a surrogate model, for use where analyses become less tractable
utilize evolutionary and Bayesian computation techniques to identify parameter regions giving rise to desired behaviours.
RoboSpartan is open source, and implemented in R using Shiny. The platform is supported by example data and video demonstrations of its functionality, detailed in the tabs.
RoboSpartan is currently available via our GitHub page: https://github.com/kalden/robospartan
We suggest running the downloaded apps in RStudio. Successful application of RoboSpartan depends on a number of R packages:
Here, and in the supporting documentation for this tool, we show how RoboSpartan can be used to understand the influence of five parameters on the Omega algorithm used in swarm robotics, examining two simulation responses.
We are going to generate parameter value sets that change the values of five parameters:
quantity (number of robots), range between 2 and 28, with a calibrated value of 20
omega (ticks), range between 15 and 35, with a calibrated value of 25
shadowed_avoidance_radius (m), range between 0.05 and 0.15, with a calibrated value of 0.1
illuminated_avoidance_radius (m), range between 0.15 and 0.3, with a calibrated value of 0.15
cool_off_period (ticks), range between 0 and 10, with a calibrated value of 5
We are interested in seeing the effects the parameter values have on two outputs:
distanceToBeacon: distance the swarm is from the beacon at the end of the simulation
efficiency: swarm efficiency in reaching the beacon
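To make the experimental setup concrete, the sketch below restates this parameter space in plain Python and draws a latin-hypercube sample over it. This is our own minimal illustration of the sampling idea, not spartan's implementation; the function name and sample count are arbitrary.

```python
import random

# The five omega-algorithm parameters above, as
# (minimum, maximum, calibrated value).
PARAMETERS = {
    "quantity":                     (2,    28,   20),
    "omega":                        (15,   35,   25),
    "shadowed_avoidance_radius":    (0.05, 0.15, 0.1),
    "illuminated_avoidance_radius": (0.15, 0.3,  0.15),
    "cool_off_period":              (0,    10,   5),
}

def latin_hypercube(parameters, n_samples, seed=1):
    """Draw one value from each of n_samples equal-width strata per
    parameter, then shuffle each column so rows pair values at random."""
    rng = random.Random(seed)
    columns = {}
    for name, (low, high, _calibrated) in parameters.items():
        width = (high - low) / n_samples
        column = [low + (i + rng.random()) * width for i in range(n_samples)]
        rng.shuffle(column)
        columns[name] = column
    return [{name: columns[name][i] for name in parameters}
            for i in range(n_samples)]

samples = latin_hypercube(PARAMETERS, 50)
```

Each of the 50 rows is one parameter value set; every parameter's range is covered evenly, with exactly one value per stratum.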
In the github repository, the file epuck_omega_algorithm.argos is the simulation that we will modify using RoboSpartan, collating the executed results prior to analysis using the second RoboSpartan app. In the descriptions in the other tabs, we reference other files in the repository that can be used to show the functionality in RoboSpartan.
RoboSpartan can generate parameter value sets for three sensitivity analysis techniques, one local (changing the value of one parameter at a time) and two global (changing all parameters simultaneously). We have not duplicated the detail of each technique here; instead, we refer the reader to the vignettes for the spartan package
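To illustrate what the local technique produces (this is our own sketch, not spartan's implementation), the snippet below sweeps each parameter across its range in turn while holding the others at their calibrated values, using two of the omega-algorithm parameters for brevity:

```python
# Two of the parameters above, as (minimum, maximum, calibrated value).
PARAMETERS = {
    "quantity": (2, 28, 20),
    "omega":    (15, 35, 25),
}

def one_at_a_time(parameters, n_levels=5):
    """Build one parameter value set per level of each swept parameter,
    with all other parameters fixed at their calibrated values."""
    value_sets = []
    for target, (low, high, _) in parameters.items():
        for i in range(n_levels):
            row = {name: calibrated
                   for name, (_, _, calibrated) in parameters.items()}
            row[target] = low + i * (high - low) / (n_levels - 1)
            value_sets.append(row)
    return value_sets

value_sets = one_at_a_time(PARAMETERS)
```

Because only one parameter deviates from its calibrated value in each set, any change in simulation response can be attributed to that parameter alone.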
Video of Using App:
If running locally, in the main roboSpartan folder, open app.R in RStudio. On the toolbar above the file editor should be a button labelled 'Run App'. Click the down arrow beside 'Run App', and choose 'Run External'. Then press 'Run App', and the parameter sampling app will open in a web browser. Alternatively you can run this app online at https://robospartan.shinyapps.io/sampling/. In this app you can:
Select the analysis technique for which you are generating samples from the drop-down box
Declare your parameter names and ranges. If one of your parameters is a whole number, tick the checkbox and RoboSpartan will round the sampled values accordingly.
State the output responses from the simulation for which you are interested in understanding the impact of a change in parameter value
State your sampling settings. For a latin-hypercube, you will need to state the number of samples to generate and the sampling algorithm (normal or optimal - note that optimal can take a long time). For eFAST, you need to state the number of samples to generate from each curve, and the number of resample curves. For robustness analysis there are no additional settings. In all cases, you will need to state the number of replicate executions you want to perform for each parameter value set (if your simulation is stochastic), as this will be included in the generated cluster scripts.
If you now press the 'Create Sample' button, a sample is created and shown in the panel on the right-hand side of the application. Below the samples you have two buttons: one to download the sample ('Download Data'), and one to download the settings used in generating these samples ('Download settings'). The settings file is worth downloading, as you can supply it to the next app, which analyses these samples, to save having to input the same parameter and measure information again.
Finally, at the bottom of the left-hand panel is a section to upload and generate ARGoS simulation files. This takes an ARGoS simulation file and alters the value of each parameter in the file, generating one file to match each sample shown on the right-hand side. You can try this by entering the parameters and measures for the omega algorithm, described in the second tab, and specifying 'epuck_omega_algorithm.argos', included in the git repository, as the ARGoS file to modify. With the file uploaded, pressing 'Download Modified ARGoS Files' will download a zip file of all simulation configuration files for the generated sample. To generate scripts to run this experiment on a Sun Grid Engine, press the 'Generate SGE Cluster Script' button. This downloads two files: one to run the parameter sets on an SGE, and a second to post-process the results into a format that can be input into the analysis app.
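RoboSpartan performs this substitution for you. Purely to illustrate the idea, the Python sketch below overwrites any XML attribute whose name matches a sampled parameter; the template and attribute names are made-up stand-ins, not the real contents of epuck_omega_algorithm.argos.

```python
import xml.etree.ElementTree as ET

# A made-up stand-in for an ARGoS configuration file; the element and
# attribute names are assumptions for illustration only.
TEMPLATE = """<argos-configuration>
  <controllers>
    <params omega="25" cool_off_period="5"/>
  </controllers>
</argos-configuration>"""

def apply_sample(template, sample):
    """Overwrite any attribute named after a sampled parameter with the
    sampled value, returning the modified configuration as a string."""
    root = ET.fromstring(template)
    for element in root.iter():
        for name, value in sample.items():
            if name in element.attrib:
                element.set(name, str(value))
    return ET.tostring(root, encoding="unicode")

modified = apply_sample(TEMPLATE, {"omega": 30, "cool_off_period": 2})
```

Generating one such file per row of the sample table yields the zip of simulation configuration files described above.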
Once you have generated your parameter value sets and performed the executions, you can use the second RoboSpartan app to analyse the data. To aid demonstration of this process, the GitHub repository contains a folder, 'Test_Settings_and_Results', holding RoboSpartan settings files and simulation execution results for all three sensitivity analyses. From RStudio, open the app.R file contained in the 'analysis_platform' folder, again running externally as detailed for the sampling app.
You will need to upload a settings file created when the parameter sample was generated in the previous app. This saves you having to enter all the simulation parameter and response information again. If using the examples in the 'Test_Settings_and_Results' folder: for robustness analysis upload 'omega_Robustness_settingFile.csv'; for latin-hypercube upload 'omega_LHC_settingFile.csv'; for eFAST upload 'omega_eFAST_settingFile.csv'.
RoboSpartan will then populate the screen with your simulation information, and show on the left hand side that it is aware which type of analysis is being performed.
You will then have to upload a summary of all executions performed for each parameter set generated in sampling. If using the example data: for robustness analysis upload 'omegaAlgorithmRobustnesscombinedParamsAndResults.csv'; for latin-hypercube upload 'omegaAlgorithmLHCcombinedParamsAndResults.csv'; for eFAST upload 'eFAST_Sample_Outputs.zip'. Note the latter is a zip file comprising several CSV files, one for each curve-parameter-measure pair. See the spartan package vignettes for more information on this result file.
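Conceptually, preparing such a summary amounts to grouping the replicate executions of each parameter set and collapsing them to a single representative value, as in this Python sketch (toy numbers, not the example data; the use of the median is our assumption for illustration):

```python
from statistics import median

# Toy (sample_id, distanceToBeacon) rows standing in for the replicate
# executions of each parameter value set.
rows = [(1, 0.9), (1, 1.1), (1, 1.0), (2, 2.4), (2, 2.6)]

def summarise_replicates(rows):
    """Collapse the replicates of each parameter set to their median."""
    by_sample = {}
    for sample_id, value in rows:
        by_sample.setdefault(sample_id, []).append(value)
    return {sid: median(values) for sid, values in by_sample.items()}

summary = summarise_replicates(rows)
```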
Should you wish, you can provide the scale in which each simulation response is measured; this is used when plotting the simulation results (amending the labels on the appropriate axis).
You can then change some default analysis settings, dependent on the analysis chosen. For robustness analysis, you can change the A-Test score deemed scientifically significant, should you wish. For eFAST analysis, you can change the confidence interval used when calculating statistical significance using the t-test (again, see the description of the technique in the spartan R package). There are no additional arguments for a latin-hypercube PRCC analysis.
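For intuition, the A-Test statistic itself is simple to compute. The sketch below is our own minimal illustration of the Vargha-Delaney A measure, not spartan's code:

```python
def a_test(group_a, group_b):
    """Vargha-Delaney A: the probability that a value drawn from group_a
    exceeds one drawn from group_b (0.5 indicates no difference).
    Ties count half."""
    wins = sum((x > y) + 0.5 * (x == y) for x in group_a for y in group_b)
    return wins / (len(group_a) * len(group_b))
```

Scores near 0.5 indicate the two groups of replicate results are indistinguishable; scores approaching 0 or 1 indicate the parameter perturbation has had a substantial effect on the response.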
With all of the above specified, you can press the button at the bottom of the app to generate the results. These will be shown in the main panel. You will have the option to look at each result plot in the browser. Alternatively, you can download all the produced statistics for the analysis (graphs as well as CSV files, the settings file, and executions file) as a zip file. All results are deleted when you leave the app.
We have recently shown how machine learning algorithms, trained on a simulation dataset, can speed up and permit execution of intensive statistical analyses by predicting simulation output. RoboSpartan App 3 permits training of five machine learning algorithms from an LHC or eFAST dataset, and generation of an ensemble that makes a prediction informed by the predictions of all five, weighting each by that algorithm's predictive performance on the dataset. For a full description of this demonstration, see our IEEE publication. To aid demonstration of this process in RoboSpartan, the GitHub repository contains a folder, 'Test_Settings_and_Results', with RoboSpartan settings files and simulation execution results.
From RStudio, open the app.R file that is contained in the 'Machine_learning_emulator_app' folder, again running externally as detailed for the sampling app. With the app open:
You will need to upload a settings file created when the parameter sample was generated in App 1. This saves you having to enter all the simulation parameter and response information again. If using the examples in the 'Test_Settings_and_Results' folder, you can upload 'omega_LHC_settingFile.csv'
You can then upload a CSV file of data on which to train the machine learning algorithms. This should consist of parameter values in columns, followed by the simulation results under those conditions. If using the examples, upload 'LHC_Summary_for_ML.csv'. This was created by App 2 by summarising the replicate executions of each parameter set into one summary row per set. As an LHC attempts to cover the complete parameter space, this should be a good set on which to train the emulators.
This set is partitioned into training, test, and validation sets. The next box that appears permits you to set the percentages used when the data is split.
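A simple way to picture this partitioning (a generic sketch, with percentages chosen arbitrarily rather than RoboSpartan's defaults):

```python
import random

def split_dataset(rows, train_pct=75, test_pct=15, validation_pct=10, seed=1):
    """Shuffle the rows, then partition them into training, test, and
    validation sets by the given percentages."""
    assert train_pct + test_pct + validation_pct == 100
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n_train = len(rows) * train_pct // 100
    n_test = len(rows) * test_pct // 100
    return (rows[:n_train],
            rows[n_train:n_train + n_test],
            rows[n_train + n_test:])

train, test, validation = split_dataset(range(200))
```

The training set fits each algorithm, the test set measures predictive performance on unseen data, and the validation set gives a final unbiased check of the chosen model.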
The set of buttons below allows you to specify which machine learning algorithms you want to train. Once you have selected two, you will be able to select Ensemble, which combines multiple algorithms into one predictive tool. If you select neural network, there are additional settings to add. As spartan uses the neuralnet package, you will need to specify the list of potential network structures that you want to examine, and spartan will choose the one that best predicts the data (using 10-fold cross-validation). Let's assume you have five parameters and two outputs. The assumption is that the number of nodes in a hidden layer should be fewer than the number of parameter inputs. So you may wish to examine one hidden layer of 4 nodes, or maybe two hidden layers of 4 then 3 nodes, or three of 4, 4, and 3 nodes, etc. To specify each in RoboSpartan, you would enter each into the network structure box. For more than one hidden layer, separate the node counts with commas (e.g. 4,3). Future versions of spartan/RoboSpartan will automate this. Note that, in our experience, a Gaussian Process model can take a significant amount of time to generate.
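To illustrate the ensemble idea, one simple weighting scheme (an assumption for illustration, not necessarily spartan's exact formula) weights each algorithm's prediction in inverse proportion to its error on the test set:

```python
def ensemble_predict(predictions, errors):
    """Combine per-algorithm predictions, weighting each algorithm in
    inverse proportion to its test-set error, so better-performing
    algorithms contribute more to the combined prediction."""
    weights = {name: 1.0 / errors[name] for name in predictions}
    total = sum(weights.values())
    return sum(predictions[name] * weights[name] / total
               for name in predictions)

# Two hypothetical emulators predicting distanceToBeacon, with equal
# test-set errors, so their predictions are averaged:
combined = ensemble_predict(predictions={"rf": 1.0, "nn": 3.0},
                            errors={"rf": 0.2, "nn": 0.2})
```

When one algorithm's error dwarfs another's, its contribution to the combined prediction becomes negligible.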
With the settings complete, press the 'Generate Predictive Models' button. A dialog will inform you that these are in process of being generated.
Once complete, you can examine the predictive accuracy of each technique using the plots in the main panel, and can download all the results and produced emulators/ensemble as a zip file.
These predictive models can then be used in place of the simulator to perform sensitivity analyses and approximate Bayesian computation, and to explore the parameter space using a genetic algorithm (GA). A description of how to do just that can be seen in this spartan vignette