It can depend on your exact definition of "broken": for example, do unpublished breaks count as "broken" or not?
> RSA encryption is not broken by factoring keys but by exploiting implementation flaws.
> Factoring a 2048-bit RSA key would require more energy than the world produces in a year, as explained here.
The above should probably contain some caveats, like "assuming a GNFS attacker, ..." or "ignoring hypothetical non-public mathematical breakthroughs".
Using a lookup table to factor numbers is a faster algorithm and has complexity O(1).
By the prime counting function, there are about (2^4096)/ln(2^4096) primes under 2^4096, which is close enough to 2^4085, or about 10^1230; the piddly factors that may be off by aren't worth sweating.
I'd tell you to "go ahead and start computing that and tell me when you're done", but I like the universe I live in, and the entire information content of the boundary of the observable universe is something like 4*10^122 bits [1]. So you're talking about creating a black hole vastly, vastly, vastly (10^1000+ times) larger than the observable universe, which some of your fellow universe residents may find to be a hostile act, and we probably aren't going to let you finish.
While you can define such a table as having "O(1)" lookups, in the sense that the vast, vast, dwarfing-the-observable-universe-by-hundreds-of-orders-of-magnitude number of light years you'd have to travel for the answer to a given query is on average pretty much the same for all lookups, it's constant in a rather useless sense.
[1]: https://math.ucr.edu/home/baez/information.html
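For anyone who wants to sanity-check the arithmetic, here's a quick Python sketch working entirely in logs, since the numbers themselves don't fit anywhere:

```python
import math

# Prime counting function: pi(x) ~ x / ln(x), for x = 2^4096.
bits = 4096
ln_x = bits * math.log(2)            # ln(2^4096) ~ 2839
log2_count = bits - math.log2(ln_x)  # log2( 2^4096 / ln(2^4096) )
print(f"primes below 2^{bits}: ~2^{log2_count:.0f}")        # ~2^4085
print(f"in decimal:            ~10^{log2_count * math.log10(2):.0f}")  # ~10^1230

# Compare against the information bound from [1].
universe_bits = 4e122
print(f"observable universe:   ~10^{math.log10(universe_bits):.0f} bits")  # ~10^123
```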
The great thing about math is that we can prove things without needing to physically construct them.
Faster than what? Are you factoring in the time to build the lookup table for primes greater than a googol?
The idea is to do it once. A lot can likely be compressed while keeping access time log-linear. Store it passively, so it won't need power except for reading, like a mask ROM or a CD-ROM.
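A minimal sketch of the build-once, log-time-lookup idea (toy-sized and purely illustrative; the whole point of the thread is that the real table can't exist):

```python
import bisect
from itertools import combinations_with_replacement

# Precompute once: a sorted table mapping semiprime -> smaller prime factor.
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
table = sorted((p * q, p) for p, q in combinations_with_replacement(primes, 2))
keys = [n for n, _ in table]

def factor(n: int) -> int:
    """Return a nontrivial factor of n via binary search: O(log table_size)."""
    i = bisect.bisect_left(keys, n)
    if i == len(keys) or keys[i] != n:
        raise ValueError(f"{n} not in table")
    return table[i][1]

assert factor(15) == 3
assert factor(667) == 23  # 23 * 29
```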
Faster than exp(((64/9)^(1/3) + o(1)) * (log n)^(1/3) * (log log n)^(2/3)), the running time of the general number field sieve. The time building the table is not counted, as that happens ahead of time.
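To put rough numbers on that formula (dropping the o(1) term, so treat these as ballpark figures only):

```python
import math

def gnfs_log2_cost(bits: int) -> float:
    """log2 of exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)),
    the GNFS heuristic running time with the o(1) term dropped."""
    ln_n = bits * math.log(2)
    exponent = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return exponent / math.log(2)

print(f"2048-bit: ~2^{gnfs_log2_cost(2048):.0f} operations")  # ~2^117
print(f"4096-bit: ~2^{gnfs_log2_cost(4096):.0f} operations")  # ~2^156
```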
No cryptographer cares about time-complexity on its own. Even the naive asymptotic cost-model is `t * (P + M)`, where P is the number of processors and M the amount of memory (including ROM/code-size). And a more serious cost-model is `t * A` where A is the chip area (~transistor count). This considers less obvious costs, like the size of the memory access circuitry, which can be substantial when you have a large number of parallel processors.
In any of these models the time would be multiplied by the size of the lookup table, resulting in a cost much higher than that of the number field sieve.
Plus you need to consider the (amortized) cost of populating the lookup table.
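To make that concrete, a back-of-the-envelope `t * (P + M)` comparison; every figure below is an assumption picked for illustration, not a measured cost:

```python
# All figures in log2, for 4096-bit moduli.
table_time   = 12    # assumed polylog lookup, call it ~2^12 steps
table_memory = 4085  # ~2^4085 table entries (see the estimate above)
gnfs_time    = 156   # L-notation evaluated above, o(1) dropped
gnfs_memory  = 100   # generous guess; real GNFS memory is far smaller

table_cost = table_time + table_memory  # log2(t * M), P negligible
gnfs_cost  = gnfs_time + gnfs_memory    # log2(t * M)
print(f"lookup table: ~2^{table_cost}")  # ~2^4097
print(f"GNFS:         ~2^{gnfs_cost}")   # ~2^256
```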
Ok, where are you going to keep those yottabytes of tables?
Chainsaws? ICMP echo? Tetris? There are a lot of options for storage available that we're really not making good use of at the moment. [1]
[1] http://tom7.org/papers/murphy2022harder.pdf
In memory. The abstract machine typically has infinite memory.