Performance
User Interface
- flopt.performance.compute(datasets, solvers='all', timelimit=None, msg=True, save_prefix=None, **kwargs)[source]
Measure the performance of each (dataset, solver) pair
- Parameters:
datasets (list of Dataset, a single Dataset, or a Problem) – datasets
solvers (list of Solver or a single Solver) – solvers
timelimit (float) – timelimit
msg (bool) – if True, display the log during the solve
save_prefix (str) – the path in which each log is saved
- Returns:
logs; logs[solver.name, dataset.name, instance.name] = log
- Return type:
dict
Examples
We compute the performance of each (dataset, solver) pair.
import flopt

# datasets
tsp_dataset = flopt.performance.get_dataset("tsp")
func_dataset = flopt.performance.get_dataset("func")

# compute the performance
logs = flopt.performance.compute([func_dataset, tsp_dataset], timelimit=2, msg=True)

# visualize the performance
log_visualizer = flopt.performance.LogVisualizer(logs)
log_visualizer.plot()
We can select which solvers to use when computing the performance.
rs_solver = flopt.Solver("Random")

# compute the performance
logs = flopt.performance.compute(
    [func_dataset, tsp_dataset],  # dataset list
    [rs_solver],                  # solver list
    timelimit=2,
    msg=True,
)

# visualize the performance
log_visualizer = flopt.performance.LogVisualizer(logs)
log_visualizer.plot()
We can use a user-defined problem as a dataset.
# prob is a user-defined problem
flopt.performance.compute(prob, timelimit=2, msg=True)
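The returned logs is a dictionary keyed by (solver name, dataset name, instance name) tuples, so individual runs can be picked out directly. A minimal sketch, assuming the key layout documented above (the loop body is illustrative):

# logs[solver.name, dataset.name, instance.name] = log
for (solver_name, dataset_name, instance_name), log in logs.items():
    print(solver_name, dataset_name, instance_name)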
- class flopt.performance.LogVisualizer(logs=None)[source]
Log visualizer for logs.
Logs can be passed to the constructor or loaded from the performance directory.
- Parameters:
logs (dict) – logs[dataset, instance, solver_name] = log
Examples
log_visualizer = LogVisualizer()
log_visualizer.load(
    solver_names=['Random', '2-Opt'],
    datasets=['tsp'],
)
log_visualizer.plot()
- load_log(solver_name, dataset, load_prefix='<flopt package directory>/../performance')[source]
load a log pickle file from load_prefix/solver_name/dataset/instance/log.pickle
- Parameters:
solver_name (str) – solver name
dataset (str) – dataset name
load_prefix (str) – log saved path
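For example, to load a single saved log (solver and dataset names as in the example above; this assumes a log was previously saved under the default performance directory):

log_visualizer = flopt.performance.LogVisualizer()
# load the saved log of the 2-Opt solver on the tsp dataset
log_visualizer.load_log('2-Opt', 'tsp')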
- plot(xitem='time', yscale='linear', plot_type='all', save_prefix=None, col=2)[source]
plot all logs
- Parameters:
xitem (str) – x-label name. ‘time’ or ‘iteration’
yscale (str) – linear or log
plot_type (str) – all: create figures for each dataset. each: create figures for each instance. noshow: do not create figures.
col (int) – #columns of figure
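A sketch combining these options; the save_prefix path is illustrative:

# one figure per instance, iteration on the x-axis, log-scaled y-axis
log_visualizer.plot(
    xitem='iteration',
    yscale='log',
    plot_type='each',
    save_prefix='./figs/',  # illustrative save location
)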
- class flopt.performance.CustomDataset(name='CustomDataset', probs=[])[source]
Create Dataset
- Parameters:
name (str) – dataset name
probs (list of Problem) – problems
Examples
Suppose we have a user-defined problem we want to benchmark.
import flopt
from flopt import Variable, Problem, Solver
from flopt.performance import CustomDataset

a = Variable('a', lowBound=2, upBound=4, cat='Continuous')
b = Variable('b', lowBound=2, upBound=4, cat='Continuous')

prob = Problem()
prob += a + b
We compute the performance of each (solver, problem) pair using CustomDataset.
cd = CustomDataset(name='user')
cd += prob  # add problem
Then, we run the performance computation.
flopt.performance.compute(cd, timelimit=2, msg=True)
After that, we can view the performance of each solver.
flopt.performance.performance(cd)
We can select which solvers to use when computing the performance.
rs_solver = Solver('Random')
tpe_solver = Solver('OptunaTPE')
cma_solver = Solver('OptunaCmaEs')
htpe_solver = Solver('Hyperopt')

logs = flopt.performance.compute(
    cd,  # dataset or dataset list
    [rs_solver, tpe_solver, cma_solver, htpe_solver],  # solver list
    timelimit=2,
    msg=True,
)

# visualize the performance
log_visualizer = flopt.performance.LogVisualizer(logs)
log_visualizer.plot()
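The solver names accepted by Solver(...) can be listed with flopt.Solver_list(), as used by the scripts in the External Interface section below:

import flopt

# print the names of all available solvers
print(flopt.Solver_list())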
Datasets
- class flopt.performance.BaseDataset[source]
Base Dataset
- class flopt.performance.tsp_dataset.TSPDataset[source]
TSP Benchmark Instance Set
- Parameters:
instance_names (list) – instance name list
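TSP instances are typically obtained through get_dataset, as in the examples above. For instance:

import flopt

# load the TSP benchmark dataset and measure solver performance on it
tsp_dataset = flopt.performance.get_dataset("tsp")
logs = flopt.performance.compute(tsp_dataset, timelimit=2, msg=True)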
External Interface
Compute and view the performance of (dataset, algo).
Compute Performance
python compute_performance.py algo save_algo_name --datasets datasetA datasetB --params param_file
algo is the algorithm; select one from flopt.Solver_list().
The results of the performance computation are saved in ./performance/save_algo_name/dataset_name/instance_name/log.pickle.
Datasets can be selected from flopt.Dataset_list().
param_file uses a parameter = value format, for example:
n_trial = 10000
timelimit = 30
Examples of running the script:
python compute_performance.py 2-Opt 2-Opt_timelimit30 --datasets tsp --params default.param
python compute_performance.py RandomSearch RandomSearch_iteration100 --datasets tsp --params default.param
python compute_performance.py OptunaCmaEsSearch OptunaCmaEsSearch --datasets func --params default.param
View Performance
python view_performance.py --algo algoA algoB --datasets datasetA datasetB
python view_performance.py --algo algoA algoB --datasets datasetA datasetB --xitem iteration
The results of the performance computation are stored in ./performance/algo/dataset_name/instance_name/log.pickle.
Datasets can be selected from flopt.Dataset_list().
xitem can be chosen from 'time' or 'iteration'.
Examples of running the script:
python view_performance.py --algo 2-Opt_timelimit30
python view_performance.py --datasets tsp
- flopt.performance.performance(datasets, solver_names=None, xitem='time', yscale='linear', plot_type='all', save_prefix=None, time=None, iteration=None, load_prefix=None)[source]
plot the performance of each (dataset, algo) pair, where algo is solver.name
- Parameters:
datasets (list of Dataset or a Problem) – datasets
solver_names (list of str) – solver names
xitem (str) – x-label item of figure (time or iteration)
yscale (str) – linear or log
plot_type (str) – all: create figures for each dataset. each: create figures for each instance. noshow: do not create figures.
save_prefix (str) – prefix of fig save name
time (int or float) – summarize logs whose time is less than time
iteration (int) – summarize logs whose iteration is less than iteration
load_prefix (str) – the path in which each log is saved
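For example, to summarize previously computed logs for the tsp dataset (solver names as in the LogVisualizer example above):

import flopt

tsp_dataset = flopt.performance.get_dataset("tsp")
# plot saved results of the Random and 2-Opt solvers on the tsp dataset
flopt.performance.performance(
    [tsp_dataset],
    solver_names=['Random', '2-Opt'],
    xitem='time',
)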