Compute x^y for Floats and Negative Exponents: Fast Exponentiation

Computing pow(x, y) efficiently is LeetCode #50 and a frequently-asked interview problem. The naive solution multiplies y times, which is O(y); the elegant solution uses fast exponentiation (binary exponentiation / repeated squaring) for O(log y). Interviewers also expect candidates to handle negative exponents, floating-point bases, integer overflow, and edge cases. This guide covers the standard approaches with working code, the recursive vs iterative variants, and the variations that come up as follow-ups.

Problem Statement

Implement pow(x, n), which computes x raised to the power n, without using language built-ins. Handle:

  • x: floating-point base
  • n: integer exponent (positive, negative, or zero)
  • Edge cases: x = 0, n = 0, n < 0

Examples:

  • pow(2.0, 10) → 1024.0
  • pow(2.1, 3) → 9.261
  • pow(2.0, -2) → 0.25
  • pow(0.0, 5) → 0.0
  • pow(1.0, 999) → 1.0

Approach 1: Naive Multiplication (O(n))

def my_pow_naive(x: float, n: int) -> float:
    """O(|n|) time, O(1) space."""
    if n < 0:
        x = 1 / x
        n = -n
    result = 1.0
    for _ in range(n):
        result *= x
    return result

Acceptable as a warm-up. For n = 2³¹, this is ~2 billion multiplications — too slow. Interviewers will push for O(log n).

Approach 2: Recursive Fast Exponentiation (Standard Answer)

Key insight: x^n = (x^(n/2))² if n is even; x × x^(n-1) if n is odd. Each recursion halves n.

def my_pow_recursive(x: float, n: int) -> float:
    """O(log |n|) time, O(log |n|) recursion stack."""
    if n < 0:
        return my_pow_recursive(1 / x, -n)
    if n == 0:
        return 1.0
    half = my_pow_recursive(x, n // 2)
    if n % 2 == 0:
        return half * half
    return half * half * x


# Tests
print(my_pow_recursive(2.0, 10))   # 1024.0
print(my_pow_recursive(2.1, 3))    # 9.261
print(my_pow_recursive(2.0, -2))   # 0.25
print(my_pow_recursive(1.0, 999))  # 1.0

Complexity: O(log |n|) time and O(log |n|) recursion stack space.

Why this works: 2^10 = (2^5)^2 = ((2^2)^2 × 2)^2. Each squaring step halves the remaining exponent, so it takes about log₂(10) ≈ 3.3 halving steps instead of 10 multiplications.

Approach 3: Iterative Fast Exponentiation

Same logic without recursion. Process the binary representation of n: for each bit, square the accumulator; if the bit is set, multiply by x.

def my_pow_iterative(x: float, n: int) -> float:
    """O(log |n|) time, O(1) space."""
    if n < 0:
        x = 1 / x
        n = -n
    result = 1.0
    while n > 0:
        if n & 1:
            result *= x
        x *= x
        n >>= 1
    return result

Complexity: O(log |n|) time, O(1) space — strictly better than recursive.

For interview purposes, either is acceptable. Iterative avoids the recursion-stack-depth concern; recursive reads more naturally for some.

Common Edge Cases

  • n = 0. Any x raised to 0 is 1, including 0^0 (mathematically debated; most implementations return 1).
  • x = 0 and n < 0. Division by zero. Either raise an exception or return infinity (depends on language convention).
  • n = INT_MIN. Negating overflows for 32-bit integers, since -INT_MIN > INT_MAX. In Python there's no issue (arbitrary precision); in C++/Java, special-case it: widen to long, or compute (1/x) × my_pow(1/x, -(n+1)).
  • Very large n. For |x| > 1 the result overflows a double, but IEEE-754 degrades gracefully: intermediate squarings saturate to infinity rather than failing. Handle infinity sensibly if asked.
  • x = 1 or x = -1. The result is always 1 or ±1 regardless of n. Some implementations short-circuit these, but with fast exponentiation the asymptotic gain is negligible.
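
The edge-case rules above can be collected into one guarded implementation. A sketch (my_pow_safe is an illustrative name, not part of the problem statement):

```python
def my_pow_safe(x: float, n: int) -> float:
    """Iterative fast exponentiation with the edge cases handled up front."""
    if n == 0:
        return 1.0              # includes 0^0 -> 1 by convention
    if x == 0.0:
        if n < 0:
            raise ZeroDivisionError("0 cannot be raised to a negative power")
        return 0.0
    if n < 0:
        x, n = 1 / x, -n        # negative exponent: invert base, negate n
    result = 1.0
    while n > 0:
        if n & 1:
            result *= x
        x *= x
        n >>= 1
    return result
```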

Common Variations

Modular exponentiation

Compute x^n mod m. Same fast-exp algorithm with mod after every multiplication. Used in cryptography (RSA, Diffie-Hellman) and number theory.

def mod_pow(x: int, n: int, m: int) -> int:
    """O(log n) modular exponentiation."""
    result = 1
    x %= m
    while n > 0:
        if n & 1:
            result = (result * x) % m
        x = (x * x) % m
        n >>= 1
    return result

Python’s built-in pow(x, n, m) does this directly with the optional third argument.
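
For example, the three-argument form keeps every intermediate value below m:

```python
# Built-in three-argument pow performs modular exponentiation directly:
print(pow(7, 128, 13))      # 7^128 mod 13 → 3
# Since Python 3.8, a negative exponent computes a modular inverse:
print(pow(3, -1, 13))       # inverse of 3 mod 13 → 9, because 3 * 9 ≡ 1 (mod 13)
```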

Matrix exponentiation

Compute M^n for a square matrix M. Same algorithm; replace scalar multiplication with matrix multiplication. Used in solving linear recurrences (Fibonacci in O(log n) time).
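
A minimal sketch of the idea using plain list-of-lists matrices (mat_mul and mat_pow are illustrative names, not from the text):

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(M, n):
    """M^n via fast exponentiation; the accumulator starts as the identity."""
    size = len(M)
    result = [[int(i == j) for j in range(size)] for i in range(size)]
    while n > 0:
        if n & 1:
            result = mat_mul(result, M)
        M = mat_mul(M, M)
        n >>= 1
    return result
```

Applied to [[1, 1], [1, 0]], this yields Fibonacci numbers: mat_pow([[1, 1], [1, 0]], n)[0][1] equals F(n).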

Square root via fast exponentiation

Inverse problem: compute x^(1/2). Use Newton’s method or bisection rather than fast exp; fast exp is for integer exponents specifically.
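
A sketch of the Newton iteration t ← (t + x/t) / 2 for sqrt(x) (my_sqrt and the tolerance choice are illustrative):

```python
def my_sqrt(x: float, eps: float = 1e-12) -> float:
    """Newton's method for the square root; converges quadratically."""
    if x < 0:
        raise ValueError("square root of a negative number")
    if x == 0:
        return 0.0
    t = x if x >= 1 else 1.0    # any positive starting guess works
    while abs(t * t - x) > eps * x:
        t = (t + x / t) / 2
    return t
```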

Fast Fibonacci

F(n) appears as the top-right entry of the matrix power [[1,1],[1,0]]^n, computable in O(log n) time with matrix fast-exp. Substantially faster than the naive O(n) iteration for very large n (say n > 10^6).
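
The same matrix identity can be unrolled into the commonly used "fast doubling" form: F(2k) = F(k)·(2F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)². A sketch (fib is an illustrative helper, not from the text):

```python
def fib(n: int) -> int:
    """O(log n) Fibonacci via fast doubling."""
    def go(k):
        # returns the pair (F(k), F(k+1))
        if k == 0:
            return (0, 1)
        a, b = go(k >> 1)
        c = a * (2 * b - a)      # F(2m)
        d = a * a + b * b        # F(2m+1)
        if k & 1:
            return (d, c + d)
        return (c, d)
    return go(n)[0]
```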

Common Mistakes

  • Recomputing x^(n/2) twice. A naive recursive version computing my_pow(x, n/2) * my_pow(x, n/2) for even n is O(n) instead of O(log n). Always store the recursive call in a variable and reuse.
  • Floating-point precision drift. Repeated squaring of imperfect floats accumulates error. For high-precision requirements, use decimal.Decimal in Python or arbitrary-precision libraries. For typical interview answers, IEEE-754 is fine.
  • Wrong handling of negative n. The standard pattern: convert x to 1/x and negate n. Forgetting this causes incorrect results for negative exponents.
  • Integer overflow on negation. In C++/Java, -INT_MIN overflows. Special-case it or use a wider type. Python doesn’t have this issue.
  • Forgetting the n = 0 base case. Without it, recursive version doesn’t terminate. Always include if n == 0: return 1.
  • Returning int when float is expected. pow(2.0, 10) should return 1024.0, not 1024. Use 1.0 as the initial result and float arithmetic throughout.
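
The first pitfall is easy to see concretely by counting recursive calls (pow_bad and pow_good are illustrative names for the broken and fixed patterns):

```python
def pow_bad(x, n, counter):
    """Recomputes the half-power twice: O(n) calls."""
    if n == 0:
        return 1.0
    counter[0] += 1
    if n % 2 == 0:
        return pow_bad(x, n // 2, counter) * pow_bad(x, n // 2, counter)
    return x * pow_bad(x, n - 1, counter)

def pow_good(x, n, counter):
    """Stores the half-power in a variable: O(log n) calls."""
    if n == 0:
        return 1.0
    counter[0] += 1
    half = pow_good(x, n // 2, counter)
    return half * half if n % 2 == 0 else half * half * x

bad, good = [0], [0]
pow_bad(2.0, 1024, bad)
pow_good(2.0, 1024, good)
print(bad[0], good[0])   # → 2047 11
```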

Frequently Asked Questions

What’s the expected interview answer?

Fast exponentiation, O(log |n|). Either recursive or iterative is fine; iterative is slightly preferred for the O(1) space. Walk through the algorithm with a small example: for 2^10, with 10 = 0b1010, the squarings take x through 2 → 4 → 16 → 256, and multiplying the terms at the set bits gives 4 × 256 = 1024. Handle negative exponents and the edge cases. The interviewer is checking algorithmic insight (recognizing the binary structure of the exponent), not raw coding speed.

How does fast exponentiation relate to RSA / cryptography?

RSA encryption is essentially modular exponentiation: c = m^e mod n. Modulus values are ~2048 bits; exponents likewise. Without fast exponentiation, this would take 2^2048 multiplications — impossible. With fast exp, it’s 2048 squarings and at most 2048 multiplications — milliseconds. Fast exp is what makes RSA practical.
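
A toy end-to-end example with the textbook primes p = 61, q = 53 (far too small for real use) shows that both directions are just modular exponentiation:

```python
p, q = 61, 53
n = p * q                   # 3233, the public modulus
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private exponent via modular inverse (Python 3.8+)

m = 65                      # message
c = pow(m, e, n)            # encrypt: c = m^e mod n
assert pow(c, d, n) == m    # decrypt: m = c^d mod n
```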

What’s the difference between O(log n) and O(n) for this problem?

For n = 2^31 (~2 billion), O(n) is ~2 billion operations (~seconds); O(log n) is ~31 operations (~microseconds). Massive practical difference. For interview-friendly small inputs, both are instant; for production cryptography or matrix-power solutions to recurrences, the gap matters.

How do I handle very high exponents?

For arbitrary-large n (e.g., n > 10^18), the iterative algorithm still works since it processes log n bits. Python’s arbitrary-precision integers make this trivial. Languages with fixed integer width require long long or BigInt types.
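
For instance, with a modulus to keep intermediate values bounded, n = 10^18 takes only about 60 squarings:

```python
# Even an astronomically large exponent is cheap once reduced mod m,
# because the loop runs once per bit of n (~60 iterations here).
MOD = 10**9 + 7
big = pow(3, 10**18, MOD)
print(big)
```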

What’s the relationship to logarithms?

Fast exponentiation performs one step per bit of n, and the bit-length of n is ⌊log₂ n⌋ + 1. The logarithm in the O(log n) bound is exactly this bit-count relationship.

See also:

  • Check if a Number Is a Power of Two
  • Two Sum: Find a Pair in an Array
  • Detect a Cycle in a Linked List

💡 Strategies for Solving This Problem

Power Function Implementation

Got this at Bloomberg in 2023. They wanted to see if I could handle edge cases properly, not just implement the happy path. Tests understanding of floating point, negative exponents, and numerical stability.

The Problem

Implement pow(x, y) that handles:

  • Negative bases
  • Negative exponents
  • Floating point exponents
  • Edge cases (0^0, negative^fraction)
  • Overflow/underflow

Approach 1: Naive Multiplication

For integer exponents, multiply x by itself y times:

pow(2, 5) = 2 * 2 * 2 * 2 * 2

Problems:

  • O(n) time complexity
  • Doesn't handle negative or float exponents
  • Slow for large exponents

Approach 2: Recursive Squaring (Fast Exponentiation)

For integer exponents, use divide and conquer:

pow(x, 8) = pow(x, 4) * pow(x, 4)
pow(x, 4) = pow(x, 2) * pow(x, 2)
pow(x, 2) = x * x

This reduces O(n) to O(log n). Handle odd exponents:

pow(x, 9) = x * pow(x, 8)

Approach 3: Iterative Binary Exponentiation

Same idea as recursive but iterative. Process bits of exponent from right to left. For each bit, square the result. If bit is 1, multiply by base.

Example: 2^13 where 13 = 1101 in binary

result = 1
2^1 = 2   (bit 0 is 1) → result = 2
2^2 = 4   (bit 1 is 0) → result = 2
2^4 = 16  (bit 2 is 1) → result = 2 * 16 = 32
2^8 = 256 (bit 3 is 1) → result = 32 * 256 = 8192
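
The trace above can be reproduced with a short loop mirroring the iterative algorithm:

```python
# Walk the bits of 13 (0b1101) from least significant to most.
x, n, result = 2, 13, 1
bit = 0
while n > 0:
    if n & 1:
        result *= x
        print(f"bit {bit} is 1: multiply by {x} -> result = {result}")
    x *= x          # x holds 2^(2^bit)
    n >>= 1
    bit += 1
print(result)       # → 8192
```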

Handling Special Cases

Negative exponents: pow(x, -y) = 1 / pow(x, y)

Negative bases with fractions: pow(-2, 0.5) = sqrt(-2) = complex number (undefined in real numbers)

Zero cases:

  • 0^positive = 0
  • 0^0 = undefined (we'll return 1 by convention)
  • 0^negative = division by zero (raise an error or return infinity, per language convention)

Float exponents: Use logarithms: x^y = e^(y * ln(x))
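
A minimal sketch of the log-based route, valid only for positive bases (float_pow is an illustrative name):

```python
import math

def float_pow(x: float, y: float) -> float:
    """x^y for real y via the identity x^y = e^(y * ln x); requires x > 0."""
    if x <= 0:
        raise ValueError("x must be positive for the log-based formula")
    return math.exp(y * math.log(x))

print(float_pow(2.0, 0.5))   # ≈ 1.41421356
```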

At Bloomberg

I started with naive approach. Interviewer said "too slow for large exponents." Then I did recursive squaring. He asked about floating point exponents. That's when I mentioned the logarithm approach. Then we discussed edge cases for 30 minutes - negative bases, zero, overflow, etc.
