Write a multiply function that multiplies 2 integers without using “*”.
2026 Update: Integer Multiplication — Without Using Multiply Operator
Multiplying integers without using the * operator is a classic interview problem that tests knowledge of bit manipulation and binary arithmetic.
def multiply_naive(a: int, b: int) -> int:
    """Multiply using repeated addition — O(b) time."""
    if b < 0:
        return -multiply_naive(a, -b)
    result = 0
    for _ in range(b):
        result += a
    return result

def multiply_bit_shift(a: int, b: int) -> int:
    """
    Multiply using bit shifting — O(log b) time.
    Based on binary representation of b:
        b = b_k*2^k + ... + b_1*2^1 + b_0*2^0
        a*b = a*(b_k*2^k) + ... + a*(b_0*2^0)
    Each a*2^i = a << i (left shift)
    """
    if b < 0:
        return -multiply_bit_shift(a, -b)
    result = 0
    while b > 0:
        if b & 1:      # If lowest bit of b is set
            result += a
        a <<= 1        # a = a * 2 (left shift)
        b >>= 1        # b = b // 2 (right shift)
    return result
def multiply_karatsuba(x: int, y: int) -> int:
    """
    Karatsuba algorithm — O(n^1.585) vs naive O(n^2) for n-digit numbers.
    Divide-and-conquer: reduces to 3 multiplications instead of 4.
    """
    if x < 10 or y < 10:
        return x * y  # Base case
    n = max(len(str(x)), len(str(y)))
    m = n // 2
    # Split x and y
    a, b = divmod(x, 10**m)  # x = a * 10^m + b
    c, d = divmod(y, 10**m)  # y = c * 10^m + d
    # 3 recursive multiplications
    ac = multiply_karatsuba(a, c)
    bd = multiply_karatsuba(b, d)
    ad_plus_bc = multiply_karatsuba(a + b, c + d) - ac - bd
    return ac * 10**(2*m) + ad_plus_bc * 10**m + bd
# Test
pairs = [(3, 4), (12, 13), (100, 200), (-5, 7), (0, 999)]
for a, b in pairs:
    naive = multiply_naive(a, b)
    bitshift = multiply_bit_shift(a, b)
    kara = multiply_karatsuba(abs(a), abs(b)) * (-1 if (a < 0) != (b < 0) else 1)
    ok = naive == bitshift == kara == a * b
    print(f"{'OK' if ok else 'FAIL'}: {a} * {b} = {naive}")
# Complexity comparison for n-digit numbers:
print("\nGrade-school: O(n^2)")
print("Karatsuba:    O(n^1.585)")
print("FFT-based:    O(n log n log log n) -- Schönhage–Strassen, used by GMP for huge integers")
Why big-integer multiplication algorithms matter: CPython’s int switches from grade-school to Karatsuba multiplication once operands pass an internal cutoff (a few thousand bits); CPython itself does not use FFT multiplication. For very large numbers (hundreds of thousands of digits), libraries such as GMP apply the Schönhage–Strassen algorithm (FFT-based) — roughly O(n log n log log n). Karatsuba is the middle ground. Understanding the algorithmic complexity of basic operations is key to predicting performance in number-theory and cryptography applications.
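As a rough illustration of the scales involved, the snippet below counts decimal digits of large powers of two using Python’s built-in big integers (the library cutoffs mentioned above are statements about internals, not something this snippet measures):

```python
# Rough sense of big-integer scale: decimal digit counts of large powers of two.
# Python's int handles these natively; only the multiplication algorithm
# underneath changes as operands grow.
def digit_count(n: int) -> int:
    """Number of decimal digits in a non-negative integer."""
    return len(str(n))

print(digit_count(2**1_000))      # 302 digits
print(digit_count(2**100_000))    # 30103 digits
```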
💡 Strategies for Solving This Problem Without Using the Multiplication Operator
This showed up at Amazon. The twist: "Don't use *, /, or library functions." It's testing if you understand what multiplication really is.
Approach 1: Repeated Addition
3 × 4 = 3 + 3 + 3 + 3 (add 3 four times)
O(b) time where b is the second number. Works but slow for large numbers.
Approach 2: Bit Manipulation (Russian Peasant)
The clever approach using bit shifts:
- a × b = (2a) × (b/2) if b is even
- a × b = a + a × (b − 1) if b is odd
Halving b at each step gives O(log b) time - way better.
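The two-case recurrence translates directly into a recursive sketch (the name `russian_peasant` is mine; negative b is handled by flipping signs first):

```python
def russian_peasant(a: int, b: int) -> int:
    # Recursive form of the recurrence; depth is O(log b).
    if b < 0:
        return -russian_peasant(a, -b)
    if b == 0:
        return 0
    if b & 1:                               # b odd:  a*b = a + a*(b-1)
        return a + russian_peasant(a, b - 1)
    return russian_peasant(a << 1, b >> 1)  # b even: a*b = (2a)*(b/2)

print(russian_peasant(3, 4))    # 12
print(russian_peasant(-5, 7))   # -35
```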
Why Bit Manipulation
Think of b in binary: 5 = 101₂ = 4 + 1
So a × 5 = a × 4 + a × 1 = (a << 2) + a
We can decompose any multiplication into additions of bit-shifted values.
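The 5 = 101₂ example can be checked directly with shifts, and the decomposition generalizes to any non-negative b (the helper `multiply_by_bits` is an illustrative sketch, not from the original solution):

```python
a = 7
# 5 = 101 in binary = 4 + 1, so a*5 = (a << 2) + (a << 0)
assert (a << 2) + a == a * 5

# Generic decomposition: add a << i for every set bit i of b (b >= 0).
def multiply_by_bits(a: int, b: int) -> int:
    return sum(a << i for i in range(b.bit_length()) if (b >> i) & 1)

print(multiply_by_bits(7, 5))   # 35
```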
Edge Cases
- Negative numbers (handle sign separately)
- Zero (return 0 early)
- Overflow (depends on language)
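A sketch of handling those edge cases with a sign-aware wrapper around an unsigned shift-and-add core (the name `multiply_signed` is mine; overflow is a non-issue in Python since its integers are arbitrary-precision):

```python
def multiply_signed(a: int, b: int) -> int:
    # Handle sign separately, multiply magnitudes, then restore sign.
    if a == 0 or b == 0:
        return 0                      # zero short-circuit
    negative = (a < 0) != (b < 0)     # XOR of the two signs
    a, b = abs(a), abs(b)
    result = 0
    while b:
        if b & 1:
            result += a
        a <<= 1
        b >>= 1
    return -result if negative else result

print(multiply_signed(-5, 7))    # -35
print(multiply_signed(-3, -4))   # 12
```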
My interviewer specifically asked about negatives. I forgot to handle them initially - don't make that mistake.