To put this in context, Bolotin imagines a computer capable of solving it over a reasonable running time of, say, one year. Such a computer would need to execute each elementary operation on a timescale of the order of 10^(-3×10^23) seconds.
This timescale is so short that it is difficult to imagine. But to put it in context, Bolotin points out that there would be little difference between running such a computer for one year and for, say, one hundred billion years (about 3×10^18 seconds), several times the age of the universe.
What’s more, this timescale is considerably shorter than the Planck timescale, which is roughly 10^-43 seconds. It’s simply not possible to measure or detect change on a scale shorter than this. So even if there were a device capable of doing this kind of calculation, there would be no way of detecting that it had done anything.
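To see how lopsided these numbers are, here is a back-of-the-envelope sketch using the order-of-magnitude figures quoted above. Because 10^(3×10^23) overflows any floating-point type, the arithmetic is done directly on the base-10 exponents; all the specific values are the article's rough estimates, not precise physics.

```python
import math

# Rough, order-of-magnitude figures from the article; we work with
# base-10 exponents because 10**(3e23) would overflow a float.
PLANCK_TIME_EXP = -43      # Planck time ~ 10^-43 s
OP_TIME_EXP = -3e23        # Bolotin's per-operation time, ~10^(-3*10^23) s

year_exp = math.log10(3.156e7)    # one year in seconds ~ 10^7.5
long_run_exp = 18                 # ~100 billion years ~ 10^18 s

# Operations a computer ticking once per Planck time completes in a year:
ops_planck_year = year_exp - PLANCK_TIME_EXP   # exponent ~ 50.5

# Operations the hypothetical computer would have to perform in a year:
ops_needed = year_exp - OP_TIME_EXP            # exponent ~ 3e23

# The article's point: stretching the run from one year to 10^18 s
# buys only ~10 extra orders of magnitude, negligible next to 3e23.
gain = long_run_exp - year_exp

print(f"Planck-speed ops in a year: 10^{ops_planck_year:.1f}")
print(f"Ops required in a year:     10^{ops_needed:.2e}")
print(f"Exponent gained by running 10^18 s instead: {gain:.1f}")
```

The gap between 10^50 and 10^(3×10^23) is why the run-time budget is essentially irrelevant: no physically meaningful extension of the computation changes the outcome.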
Interesting theory linking information theory and quantum mechanics. Basically, it says that we don’t see quantum behavior at the macroscopic level because the solutions to the equations that govern it would take too long to calculate, even if we assume the entire universe is the computer. Did I get that right? Probably not. Still, I love concepts like this. It makes me wonder: if our entire universe really were taking place inside a computer, how would one know? The base unit of our computers is a bit; what would a bit look like from the perspective of a running program? Would it look like the Planck constant? Perhaps that’s the time between cycles? Would it matter?