A common way to generate minimax polynomial approximations is the Remez exchange algorithm, published by the Russian mathematician Evgeny Remez in 1934. This is a numerical procedure that often involves ill-conditioned systems of equations. As a consequence, it is usually implemented with the help of an arbitrary-precision library. For example, in the implementation of the Remez algorithm that I use, I configured the library for 1024-bit precision.
For reasonably well-behaved functions, variants of the Remez algorithm can find approximations very close to the mathematical minimax polynomial. The problem, as noted in the question, is what happens when one moves the generated polynomial coefficients into a finite-precision floating-point computation. One often finds the minimax property of the approximation impaired, sometimes significantly so. There are two sources of error at play. First, the generated coefficients cannot be represented exactly in the finite-precision floating-point format. Second, the evaluation of the polynomial uses finite-precision operations instead of mathematical operations with infinite precision.
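Both error sources are easy to demonstrate. The sketch below uses a float64 least-squares fit of exp(x) on [0, 1] as a stand-in for true minimax coefficients (an assumption for illustration; a real workflow would start from Remez output), then rounds the coefficients to float32 and evaluates with float32 arithmetic:

```python
# Sketch of the two error sources, using a least-squares fit of exp(x)
# on [0, 1] as a proxy for a minimax polynomial (illustration only).
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
f = np.exp(x)

# "Ideal" coefficients, computed and held in float64 (highest degree first)
coeffs64 = np.polyfit(x, f, 3)

# Error source 1: the coefficients change when rounded to a narrower format
coeffs32 = coeffs64.astype(np.float32)
print("representation error per coefficient:",
      coeffs64 - coeffs32.astype(np.float64))

# Error source 2: evaluating with narrow-precision arithmetic (Horner scheme)
def horner(c, t):
    acc = c[0]
    for ci in c[1:]:
        acc = acc * t + ci
    return acc

err64 = np.max(np.abs(horner(coeffs64, x) - f))
err32 = np.max(np.abs(horner(coeffs32, x.astype(np.float32)).astype(np.float64) - f))
print("max |error|, float64 coefficients and evaluation:", err64)
print("max |error|, float32 coefficients and evaluation:", err32)
```

For a well-conditioned case like this the perturbation is small, but for approximations pushed close to the accuracy limit of the target format it can destroy the near-minimax property.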
The first problem is the easier one to address. As one can see from some quick experiments, simply rounding the coefficients to the finite-precision format does not accomplish the desired near-minimax result. By using a finite-precision format, we basically move from an N-dimensional continuous space to an N-dimensional discrete lattice, and to do this properly, we need to find the closest lattice points. This is a solvable but hard problem, which is usually made tractable through the use of heuristics. Relevant literature:
N. Brisebarre, J.-M. Muller, and A. Tisserand, "Computing machine-efficient polynomial approximations", ACM Transactions on Mathematical Software, Vol. 32, No. 2, June 2006, pp. 236-256
(online)
Nicolas Brisebarre and Sylvain Chevillard, "Efficient polynomial L∞-approximations", In 18th IEEE Symposium on Computer Arithmetic, June 2007, pp. 169-176
(online)
Florent de Dinechin and Christoph Lauter, "Optimizing polynomials for floating-point implementation", ArXiv preprint 2008
(online)
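The lattice-search idea can be sketched in a deliberately brute-force form: for each coefficient, consider its round-to-nearest float32 value plus the adjacent representable values, and exhaustively pick the combination with the smallest maximum error. This is only feasible for very low degrees; the papers above (and tools built on them) use much smarter heuristics:

```python
# Brute-force illustration of the coefficient-lattice search, with float32
# as the target format. Degree 2 keeps the search tiny (3^3 combinations).
import itertools
import numpy as np

x = np.linspace(0.0, 1.0, 501)
f = np.exp(x)
coeffs64 = np.polyfit(x, f, 2)  # proxy for minimax coefficients

def max_err(c):
    # Evaluate in float64 to isolate representation error from evaluation error
    return np.max(np.abs(np.polyval(np.asarray(c, dtype=np.float64), x) - f))

def neighbors(c):
    # Nearest float32 value and its two adjacent representable values
    near = np.float32(c)
    return [np.nextafter(near, np.float32(-np.inf)), near,
            np.nextafter(near, np.float32(np.inf))]

naive = [np.float32(c) for c in coeffs64]
best = min(itertools.product(*(neighbors(c) for c in coeffs64)), key=max_err)

print("naive rounding, max error:", max_err(naive))
print("best lattice point, max error:", max_err(best))
```

Since naive rounding is one of the candidates examined, the search can never do worse than it, and it frequently finds a strictly better lattice point.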
The Sollya tool uses these techniques from the literature for its fpminimax command. It is worth checking out in addition to Maple's and Mathematica's facilities for generating minimax polynomial approximations; in my experience, it often produces superior approximations.
The second problem, namely how to account for evaluation with finite-precision floating-point computation and how to adjust the coefficients of a polynomial approximation accordingly, is still a subject of research. Some initial results:
Tor Myklebust, "Computing accurate Horner form approximations to special functions in finite precision arithmetic", ArXiv manuscript 2015
(online)
Denis Arzelier, Florent Bréhard, Mioara Joldes, "Exchange algorithm for evaluation and approximation error-optimized polynomials", In 26th IEEE Symposium on Computer Arithmetic, June 2019, pp. 30-37
(online)
Note that the first publication came about due to a question I asked on Stack Overflow.
For my own use, I employ a heuristic search to find approximations optimized for both the representational error in the coefficients and the evaluation error in the polynomial evaluation. It can be loosely described as a form of simulated annealing. I have also looked into the use of genetic programming, but the preliminary results did not look promising, so I stopped pursuing that approach.
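As a rough illustration only (this is not my actual implementation; the temperature schedule, step count, and move set are made up for the example), an annealing-style search over the float32 coefficient lattice might score candidates by the error of a float32 Horner evaluation and take random single-ULP moves:

```python
# Toy annealing-style search over float32 coefficients: random one-ULP moves,
# accepted when they improve the float32 Horner evaluation error or, with
# decreasing probability, when they worsen it. All parameters are invented
# for illustration.
import math
import random
import numpy as np

rng = random.Random(42)
x64 = np.linspace(0.0, 1.0, 501)
f = np.exp(x64)
x32 = x64.astype(np.float32)

def horner32(c, t):
    acc = np.full_like(t, c[0])
    for ci in c[1:]:
        acc = acc * t + np.float32(ci)
    return acc

def score(c):
    # Max error of the float32 evaluation, measured against float64 exp
    return float(np.max(np.abs(horner32(c, x32).astype(np.float64) - f)))

cur = [np.float32(c) for c in np.polyfit(x64, f, 3)]  # rounded starting point
cur_err = score(cur)
best, best_err = list(cur), cur_err

temp = cur_err  # start temperature on the scale of the initial error
for step in range(2000):
    cand = list(cur)
    i = rng.randrange(len(cand))
    direction = np.float32(np.inf) if rng.random() < 0.5 else np.float32(-np.inf)
    cand[i] = np.nextafter(cand[i], direction)  # move one ULP on the lattice
    cand_err = score(cand)
    if cand_err < cur_err or rng.random() < math.exp((cur_err - cand_err) / temp):
        cur, cur_err = cand, cand_err
    if cur_err < best_err:
        best, best_err = list(cur), cur_err
    temp *= 0.995  # geometric cooling

print("initial max error:", score([np.float32(c) for c in np.polyfit(x64, f, 3)]))
print("best max error found:", best_err)
```

Because the objective is the error of the actual finite-precision evaluation, both error sources discussed above are folded into a single figure of merit that the search drives down.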