Python uses dynamic typing. That is, you can only know the type of an object with certainty at runtime.
A consequence is that the only way to verify a piece of code uses the right data types is to run it. Thus, testing becomes very important. But exercising all the code paths in a program can take a long time.
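To illustrate the point, here is a minimal sketch (the function name is hypothetical): a type mistake in a dynamically typed function stays invisible until the offending code path actually executes.

```python
def total_length(items):
    # Nothing stops a caller from passing a list of the wrong type;
    # the mistake only surfaces when len() is actually called at runtime.
    return sum(len(item) for item in items)

# Works as intended:
print(total_length(["ab", "cde"]))  # 5

# The bad call type-checks nowhere ahead of time; it fails only when run:
try:
    total_length([1, 2, 3])
except TypeError as exc:
    print(f"Caught only at runtime: {exc}")
```

Unless a test happens to exercise the second call, the bug ships.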
The problem is exacerbated when many people work on a program for a long time. It is hard to get developers to write documentation or to build consistent interfaces, so only a limited number of people understand which types should be used, and everyone spends a lot of time waiting for tests to run.
Still, the author's view is overly pessimistic. Many companies maintain large Python codebases successfully, albeit by paying for extensive (and expensive) test suites and on-call rapid-response teams.
For instance, Facebook has hundreds of millions of lines of code, of which 21% are Python (as of 2016). With that level of preexisting investment, it is much cheaper in the short term to develop ways of making Python safer than to migrate the code to a new language (like Julia or Rust).
And we can see this in the ecosystem.
Python has been modified to address the typing problem through the introduction of type annotations. While annotations are not enforced at runtime, they enable fast (near real-time) static type checks that significantly reduce the need to rely on tests, and tools like Pyre and mypy can use them to enforce interfaces. This makes a large Python codebase of the sort the author discusses much more manageable.
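A short sketch of how this works in practice (the function is hypothetical). Note that the interpreter itself ignores the annotations, which is exactly the "not enforced at runtime" behavior described above; a static checker such as mypy catches the mismatch without running the code.

```python
def double(x: int) -> int:
    # The annotation declares the intended interface, but CPython
    # does not enforce it: any argument that supports * will work.
    return x * 2

# Runs without error at runtime despite violating the annotation:
result = double("ab")
print(result)  # abab

# Running a static checker like mypy over this file would flag the call
# with an error along the lines of:
#   Argument 1 to "double" has incompatible type "str"; expected "int"
```

The checker runs in seconds over a whole codebase, which is why annotations can substitute for many of the type-safety tests the author worries about.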