
As part of our Poolkeh paper, we wanted to use nevergrad. However, it doesn't always return the same result, nor the optimal one.

We tried DiscreteOnePlusOne as an optimizer, but it didn't find the optimal result. OnePlusOne worked reasonably well, but it still didn't return the best solution, and it needed a hint like this one:

if s1 < s2*(1+r0):
    return np.inf

We explored the case of pooling COVID-19 tests with two steps, here is the complete code:

!pip install nevergrad
import numpy as np

def optimal(r0: float, s1: int, s2: int):
  r0 = r0 / 100  # r0 is given in %
  if s1 < s2 * (1 + r0):  # hint: the first pool must be larger than the second
    return np.inf

  p1 = 1 - np.power(1 - r0, s1)  # probability that a size-s1 pool tests positive
  r1 = r0 / p1                   # infection rate within a positive size-s1 pool
  p2 = 1 - np.power(1 - r1, s2)  # probability that a size-s2 sub-pool tests positive
  return 1/s1 + p1/s2 + p1*p2    # expected number of tests per person

import nevergrad as ng

def findBestStrategy(r0: float):
  '''r0 is in %'''
  parametrization = ng.p.Instrumentation(
      r0=r0,
      s1=ng.p.Scalar(lower=1, upper=100).set_integer_casting(),
      s2=ng.p.Scalar(lower=1, upper=100).set_integer_casting(),
  )
  optimizer = ng.optimizers.OnePlusOne(parametrization=parametrization, budget=2000, num_workers=1)
  recommendation = optimizer.minimize(optimal)
  return recommendation.kwargs

findBestStrategy(1)
{'r0': 1, 's1': 23, 's2': 5}

This is not optimal, but it is really close:

optimal(1, 23, 5)
0.13013924406458133
optimal(1, 24, 5)
0.13007783167425113
  1. How can we make nevergrad more robust?
  2. Which optimizer should we use?
  3. Is there a way to run nevergrad multiple times with different initial conditions (e.g. different random seeds) and keep the best result across all runs?
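Question 3 can be prototyped without relying on any nevergrad internals: run several independently seeded searches and keep the overall best. The sketch below illustrates the restart pattern, using a seeded random search over the integer grid as a stand-in for a single optimizer run (`run_once` and `best_of_restarts` are hypothetical names, not nevergrad API); with nevergrad you would instead construct a fresh `ng.optimizers.OnePlusOne(...)` per restart and call `minimize` on it.

```python
import numpy as np

def optimal(r0: float, s1: int, s2: int) -> float:
    # Objective from the question: expected number of tests per person.
    r0 = r0 / 100  # r0 is given in %
    if s1 < s2 * (1 + r0):
        return np.inf
    p1 = 1 - np.power(1 - r0, s1)
    r1 = r0 / p1
    p2 = 1 - np.power(1 - r1, s2)
    return 1/s1 + p1/s2 + p1*p2

def run_once(seed: int, budget: int = 2000):
    # Stand-in for one optimizer run: seeded random search over the grid.
    rng = np.random.default_rng(seed)
    best_loss, best_kwargs = np.inf, None
    for _ in range(budget):
        s1 = int(rng.integers(1, 101))
        s2 = int(rng.integers(1, 101))
        loss = optimal(1, s1, s2)
        if loss < best_loss:
            best_loss, best_kwargs = loss, {'s1': s1, 's2': s2}
    return best_loss, best_kwargs

def best_of_restarts(n_restarts: int = 5):
    # Independently seeded restarts; keep the run with the lowest loss.
    return min((run_once(seed) for seed in range(n_restarts)),
               key=lambda t: t[0])

loss, kwargs = best_of_restarts()
```

Because the seeds are fixed, the whole procedure is reproducible, which also addresses the "doesn't always return the same result" complaint.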
0x90
  • Why are you looking for a non-differentiable solution? If it is truly the case that you are looking for integer `s1` and `s2`, this search space is so small you can just search - no? Is this a toy optimization and not the actual function you are interested in? Or is this meant to be a more general question? – modesitt May 10 '20 at 07:18
  • @modesitt the search space is more complex than 100x100. It's just a toy example that shows how nevergrad still doesn't find the optimal solution. – 0x90 May 10 '20 at 08:04
  • `However, sadly it doesn't always return the same result, nor the most optimal one.` What are your expectations? The former is because of non-controlled random-seeding (see docs) and the latter just shows the general guarantees of those solvers: none (no guarantees about local or global-convergence). Tuning something like this on a non-representative toy-problems makes no sense to me as the transfer to other instance-statistics might kill every tuning. Tuning *heuristics* without a testbed and care about statistics is usually a bad idea, so is asking others without access to your testbed. – sascha May 10 '20 at 09:48
  • @sascha is there a way to run 100 independent minimization processes and take the strategy that occurs most frequently, or to pick the one that gives the best result out of all 100 tries? – 0x90 May 11 '20 at 18:08
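For this 100×100 toy case, modesitt's point is easy to check: exhaustive search over the integer grid is cheap and deterministic, and it yields the true optimum to judge any nevergrad run against. A minimal sketch follows (the objective is repeated from the question so the snippet is self-contained; `brute_force` is a name chosen here):

```python
import numpy as np

def optimal(r0: float, s1: int, s2: int) -> float:
    # Objective from the question: expected number of tests per person.
    r0 = r0 / 100  # r0 is given in %
    if s1 < s2 * (1 + r0):
        return np.inf
    p1 = 1 - np.power(1 - r0, s1)
    r1 = r0 / p1
    p2 = 1 - np.power(1 - r1, s2)
    return 1/s1 + p1/s2 + p1*p2

def brute_force(r0: float = 1, bound: int = 100):
    # Evaluate every (s1, s2) pair once: bound*bound = 10,000 evaluations.
    return min(
        ((optimal(r0, s1, s2), s1, s2)
         for s1 in range(1, bound + 1)
         for s2 in range(1, bound + 1)),
        key=lambda t: t[0],
    )  # returns (loss, s1, s2)

print(brute_force())
```

Since the exhaustive minimum can never be worse than any single point, its loss is at most `optimal(1, 24, 5)`, the best value found in the question.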

0 Answers