
In quantum computers, these two effects should be observable:

1) If an operator acts on an arbitrary qubit Qn of a quantum system S consisting of several qubits, then we get a new quantum system S' where the amplitudes of ALL qubits have changed.

2) If an operator acts on one qubit of a quantum system T consisting of two entangled qubits, then both qubits are affected.

So which one of these is the reason for the exponential speed-up expected from quantum computers?
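To make the two effects concrete, here is a minimal sketch of what I mean, using a plain NumPy statevector simulation (the gates, state choices and the little simulation are just my illustration, not any real Q-device API):

```python
import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
X = np.array([[0, 1], [1, 0]])                  # Pauli-X ( bit-flip )

# Effect 1): a 3-qubit product state |000>, operator H applied to qubit 0 only
psi = np.zeros(8)
psi[0] = 1.0                                    # amplitudes of |000> ... |111>
op = np.kron(H, np.kron(I, I))                  # H on qubit 0, identity on the rest
print(op @ psi)                                 # the global 8-entry amplitude vector changes

# Effect 2): a 2-qubit entangled Bell pair (|00> + |11>)/sqrt(2), X applied to qubit 0
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(np.kron(X, I) @ bell)                     # joint state becomes (|10> + |01>)/sqrt(2)
```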


1 Answer


Q : "...which one of these is the reason...?"

None.

As of 2020-Q2, the Q-devices available so far operate in constant [TIME], i.e. the time-domain duration of an operation has an O(1) (in)-dependence on the problem complexity. So neither assumption 1) nor assumption 2) makes any difference in isolation; both are part of the quantum Level-of-Detail of the observed World as we know it, which is ( by design, intrinsically ) Q-[PARALLEL].

Actually,
the World as we know it is the Q-SpaceTime continuum itself, internally discrete in both Time and Space at such a LoD that we will never have trouble with it in any foreseeable Q-device computing, unless we consider The Universe itself to be a Q-device of its own kind, which it obviously is ;)

Everything happens "now", not one step after another ( as if sequenced in some pipelined fashion ).
( The full depth of this topic far exceeds the format of this Q/A-site. )

Sure,
pedantic and Q-orthodox users may claim a need for some [SPACE]-Domain downscaling tricks due to the current limits of the physical Q-engine inside today's Q-devices, yet even these retain the O(1) (in)-dependence as the asymptotic-complexity model of QPU-based computing.

Q : "...the exponential speed-up expected from quantum computers?"

Given the above, there is little more than a market-making motivation behind calling Q-devices "expected" to deliver
(cit.): "exponential speed-up".

Given that the target Q-device operates with O(1) scaling, all previously known technologies are being compared against a constant-Time "processing".

In this context,
the category of Speedup
will show an ever better factor of comparison the worse the original processing is ( or was ).
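A back-of-the-envelope sketch of that comparison ( the numbers below are purely illustrative assumptions, not measurements of any real Q-device ):

```python
# The "speed-up" against a constant-Time Q-"processor" is just T_original(n) / T_Q,
# so it grows exactly as fast as the original processing degrades.
T_Q = 1.0e-3                                    # assumed constant Q-processing time [s]

originals = {                                   # assumed classical cost models [s]
    "O( n )    ": lambda n: 1e-9 * n,
    "O( n**2 ) ": lambda n: 1e-9 * n ** 2,
    "O( 2**n ) ": lambda n: 1e-9 * 2 ** n,
}

n = 40
for name, T in originals.items():
    print(name, "speed-up factor ~", T(n) / T_Q)
```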

The Q-device ( no matter how easy or hard the original processing was )
will be
and
will always remain a constant-Time Q-"processor".


The BONUS Part :

So,
the O(1) Q-device will always show itself at its best against the worst-scaling original problem-processing strategies, and exponential scaling is not even our worst enemy in the Complexity ZOO.

Similarly, the very same Q-device will show itself as a poor neighbour once compared to any currently known O(1) processing, which may and often will outperform any Q-device, thanks to having small, if not altogether missing, initial-Setup / Result-detection / termination Gap(s), the very overheads known as the Q-device's principal add-on processing-latencies ( which do not play any significant role in the former case, where the nature of the shift-of-paradigm in the field of complexity works strongly against any classical device(s) ).
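The same toy model, extended with those add-on latencies ( again, hypothetical figures chosen only to show the effect, not benchmarks of any particular Q-device ):

```python
# One Q-device "shot" pays the constant add-on latencies no matter how small
# the problem is, so against an already-O(1) classical step it simply loses.
T_SETUP, T_GATE, T_READOUT = 1.0e-3, 1.0e-6, 1.0e-3    # hypothetical latencies [s]
T_Q_total = T_SETUP + T_GATE + T_READOUT                # constant, independent of n

T_classical_O1 = 1.0e-9                                 # a trivial classical O(1) step [s]

print("Q-device vs. classical O(1) 'speed-up':", T_classical_O1 / T_Q_total)
# ~5e-7, i.e. the Q-device is roughly two million times slower here
```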
