Time needed to factor large integers

(johndcook.com)

18 points | by ibobev 5 days ago

12 comments

  • DoctorOetker a day ago

    It can depend on your exact definition of "broken": for example, do you consider unpublished breaks to count as "broken" or not?

    > RSA encryption is not broken by factoring keys but by exploiting implementation flaws.

    > Factoring a 2048-bit RSA key would require more energy than the world produces in a year, as explained here.

    The above should probably contain some caveats, like "Assuming a GNFS attacker, ..." or "ignoring hypothetical non-public mathematical breakthroughs".

  • charcircuit a day ago

    Using a lookup table to factor numbers is a faster algorithm and has complexity O(1).

    • jerf a day ago

      By the prime counting function, there are about (2^4096)/ln(2^4096), or close enough to 2^4085, prime numbers under 2^4096, which is close enough to 10^1230 not to sweat the piddly factors it may be off by.
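
      As a rough sanity check of that arithmetic (a hypothetical Python sketch, not part of the thread's claims):

          # pi(x) ~ x / ln(x) by the prime number theorem
          import math

          bits = 4096
          ln_x = bits * math.log(2)            # ln(2^4096) ~ 2839
          log2_pi = bits - math.log2(ln_x)     # log2(x / ln x) ~ 4084.5
          log10_pi = log2_pi * math.log10(2)   # decimal order of magnitude
          print(f"~2^{log2_pi:.1f} primes below 2^{bits}, i.e. ~10^{log10_pi:.0f}")
          # -> ~2^4084.5 primes below 2^4096, i.e. ~10^1230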

      I'd tell you to "go ahead and start computing that and tell me when you're done"; however, I like the universe I live in, and the entire information content of the boundary of the observable universe is something like 4*10^122 bits [1]. So you're talking about creating a black hole vastly, vastly, vastly, 10-to-the-power-of-a-thousand-plus times larger than the observable universe, which some of your fellow universe residents may find to be a hostile act, and we probably aren't going to let you finish.

      While you can define such a table as having "O(1)" lookup properties, in the sense that the vast, vast, vast, vast, vast, dwarfing-the-observable-universe-by-hundreds-of-orders-of-magnitude number of light years you'd have to travel for the answer to a given query is on average pretty much the same for all lookups, it's constant in a rather useless sense.

      [1]: https://math.ucr.edu/home/baez/information.html

      • charcircuit a day ago

        The great thing about math is that we can prove things without needing to physically construct them.

        • 8 hours ago
          [deleted]
    • cipehr a day ago

      Faster than what? Are you factoring in the time to build the lookup table for primes greater than a googol?

      • nine_k a day ago

        The idea is to do it once. A lot of it can likely be compressed while keeping access time log-linear. Store it passively, like mask ROM or CD-ROM, so that it needs no power except for reading.
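
        A toy version of that "build once, read forever" idea, as a hypothetical Python sketch (the function names here are illustrative, not from the thread):

            # Smallest-prime-factor (SPF) table: built once by a sieve,
            # then each factoring query is just repeated array reads.
            def build_spf_table(limit: int) -> list[int]:
                """spf[n] = smallest prime factor of n, for 2 <= n <= limit."""
                spf = list(range(limit + 1))
                for p in range(2, int(limit ** 0.5) + 1):
                    if spf[p] == p:  # p is prime
                        for m in range(p * p, limit + 1, p):
                            if spf[m] == m:
                                spf[m] = p
                return spf

            def factor(n: int, spf: list[int]) -> list[int]:
                factors = []
                while n > 1:
                    factors.append(spf[n])
                    n //= spf[n]
                return factors

            spf = build_spf_table(10_000)  # the one-time, ahead-of-time cost
            print(factor(9991, spf))       # -> [97, 103]

        At cryptographic sizes the same table would need more entries than the observable universe has bits, which is the objection raised upthread.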

      • charcircuit a day ago

        Faster than exp(((64/9)^(1/3) + o(1)) * (log n)^(1/3) * (log log n)^(2/3)). The time building the table is not counted, as that happens ahead of time.
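
        Plugging concrete key sizes into that formula (ignoring the o(1) term) gives a feel for the magnitudes; a hypothetical Python sketch:

            import math

            def gnfs_log2_ops(bits: int) -> float:
                """log2 of exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)), n = 2^bits."""
                ln_n = bits * math.log(2)
                nats = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
                return nats / math.log(2)  # convert nats to bits

            print(f"RSA-2048: ~2^{gnfs_log2_ops(2048):.0f} ops")  # ~2^117
            print(f"RSA-4096: ~2^{gnfs_log2_ops(4096):.0f} ops")  # ~2^156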

    • CodesInChaos 9 hours ago

      No cryptographer cares about time complexity on its own. Even the naive asymptotic cost model is `t * (P + M)`, where t is the running time, P the number of processors, and M the amount of memory (including ROM/code size). A more serious cost model is `t * A`, where A is the chip area (roughly the transistor count). This captures less obvious costs, like the size of the memory-access circuitry, which can be substantial when you have a large number of parallel processors.

      In any of these models, the time would be multiplied by the size of the lookup table, resulting in a cost much higher than the number field sieve's.

      Plus you need to consider the (amortized) cost of populating the lookup table.
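
      To make that concrete, here is a hypothetical Python sketch comparing the two attacks on a 2048-bit modulus under the naive `t * (P + M)` model, with everything expressed in log2 (bits of cost):

          import math

          bits = 2048
          ln_n = bits * math.log(2)

          # Lookup table: t is O(1), but M must hold ~pi(2^2048) ~ 2^2037 entries.
          lookup_cost = bits - math.log2(ln_n)  # log2(1 * (1 + M)) ~ log2(M)

          # GNFS: memory is subexponential, so time dominates.
          gnfs_cost = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) \
              * math.log(ln_n) ** (2 / 3) / math.log(2)

          print(f"lookup table ~2^{lookup_cost:.0f} vs GNFS ~2^{gnfs_cost:.0f}")
          # -> lookup table ~2^2038 vs GNFS ~2^117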

    • mikewarot a day ago

      Ok, where are you going to keep those yottabytes of tables?