Problem Overview
You are given a non-negative integer n. For every number i in the range [0, n], you need to calculate how many 1s are present in the binary representation of i.
Return the result as an integer array ans of length n + 1, where ans[i] represents the number of 1s in the binary form of i.
Example 1:
Input: n = 2
Output: [0, 1, 1]
Explanation:
- 0 in binary → 0 → number of 1s = 0
- 1 in binary → 1 → number of 1s = 1
- 2 in binary → 10 → number of 1s = 1
Example 2:
Input: n = 5
Output: [0, 1, 1, 2, 1, 2]
Explanation:
- 0 → 0b0 → 0
- 1 → 0b1 → 1
- 2 → 0b10 → 1
- 3 → 0b11 → 2
- 4 → 0b100 → 1
- 5 → 0b101 → 2
The goal is to compute all counts efficiently — not by recalculating for every number from scratch.
Intuition / Approach
A naïve approach would be to count the bits of each number independently, either with a built-in method or by checking every bit position manually. However, this would take O(n × number_of_bits) time, which is inefficient for large n.
To optimize, we use Dynamic Programming based on the relationship between numbers and their binary patterns.
Key Observations:
- The number of 1s in a number i is closely related to the number of 1s in a smaller number. Specifically:
  - If i is even, then the number of 1s in i is the same as in i / 2, because right-shifting removes a trailing 0.
    Example: 2 (10) → same number of 1s as 1 (1)
  - If i is odd, then the number of 1s in i is one more than in i - 1, because the last bit is 1.
    Example: 3 (11) → one more 1 than 2 (10)
Hence, we can derive the formula:
bits[i] = bits[i >> 1] + (i & 1)
Where:
- i >> 1 means integer division by 2 (right shift)
- (i & 1) checks whether the last bit is 1 (adds one if i is odd)
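For example, bits[6] = bits[3] + (6 & 1) = 2 + 0 = 2, and bits[7] = bits[3] + (7 & 1) = 2 + 1 = 3, matching the binary forms 110 and 111.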
This relationship lets us compute the result in O(n) time.
Algorithm Explanation
- Initialize an array bits of size n + 1 to store the results.
- Set bits[0] = 0, since the binary representation of 0 has no 1s.
- For each number i from 1 to n:
  - Compute bits[i] = bits[i >> 1] + (i & 1)
- Return the bits array.
This approach builds the result iteratively using previously computed values, avoiding redundant bit calculations.
Java Code Implementation
public class CountingBits {
    public int[] countBits(int n) {
        int[] bits = new int[n + 1];
        bits[0] = 0; // Base case: 0 has no set bits
        for (int i = 1; i <= n; i++) {
            bits[i] = bits[i >> 1] + (i & 1); // reuse the count already computed for i >> 1
        }
        return bits;
    }

    // Optional main method for testing
    public static void main(String[] args) {
        CountingBits obj = new CountingBits();
        int n = 5;
        int[] result = obj.countBits(n);
        System.out.print("Output: [");
        for (int i = 0; i < result.length; i++) {
            System.out.print(result[i]);
            if (i < result.length - 1) System.out.print(", ");
        }
        System.out.println("]");
    }
}
Dry Run Example
For n = 5:
| i | Binary | i >> 1 | i & 1 | bits[i >> 1] | bits[i] = bits[i >> 1] + (i & 1) |
|---|---|---|---|---|---|
| 0 | 0000 | 0 | 0 | 0 | 0 |
| 1 | 0001 | 0 | 1 | 0 | 1 |
| 2 | 0010 | 1 | 0 | 1 | 1 |
| 3 | 0011 | 1 | 1 | 1 | 2 |
| 4 | 0100 | 2 | 0 | 1 | 1 |
| 5 | 0101 | 2 | 1 | 1 | 2 |
Result: [0, 1, 1, 2, 1, 2]
Complexity Analysis
- Time Complexity: O(n)
  We iterate once through all numbers from 1 to n, and each step takes constant time.
- Space Complexity: O(n)
  We use an array of size n + 1 to store the bit counts.
Alternative Approaches
- Built-in Bit Counting (Naïve):
  - Use Integer.bitCount(i) for each number from 0 to n (sketched below).
  - Complexity: O(n × number_of_bits), slower for large n.
- Brian Kernighan’s Algorithm:
  - For each number, repeatedly turn off the rightmost set bit until the number becomes zero (sketched below).
  - Slightly faster per number than checking each bit, but still O(n log n) overall.
- DP Based on Power of Two Ranges:
  - Use intervals between powers of two, where each range reuses results from the previous one (sketched below).
  - Slightly more complex but conceptually similar to the bit-shift method.
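For reference, a minimal sketch of the naïve approach using the standard library's Integer.bitCount (the class name CountingBitsNaive is just for illustration):
import java.util.Arrays;

public class CountingBitsNaive {
    // Count set bits independently for every number from 0 to n.
    public int[] countBits(int n) {
        int[] ans = new int[n + 1];
        for (int i = 0; i <= n; i++) {
            ans[i] = Integer.bitCount(i); // standard library popcount
        }
        return ans;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(new CountingBitsNaive().countBits(5))); // [0, 1, 1, 2, 1, 2]
    }
}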
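A short sketch of Brian Kernighan’s trick, relying on the identity that x & (x - 1) clears the lowest set bit; the class and method names are illustrative:
import java.util.Arrays;

public class KernighanBits {
    // Count the set bits of one value by clearing the lowest set bit each step.
    static int popCount(int x) {
        int count = 0;
        while (x != 0) {
            x &= (x - 1); // turns off the rightmost 1 bit
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 5;
        int[] ans = new int[n + 1];
        for (int i = 0; i <= n; i++) {
            ans[i] = popCount(i);
        }
        System.out.println(Arrays.toString(ans)); // [0, 1, 1, 2, 1, 2]
    }
}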
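One possible reading of the power-of-two-range idea (a sketch of a common variant, not necessarily the exact version intended here): for i between a power of two offset and 2 × offset, the count is 1 plus the count of i - offset.
import java.util.Arrays;

public class CountingBitsRanges {
    // For i in [offset, 2 * offset), where offset is a power of two:
    // bits[i] = 1 + bits[i - offset], reusing the previous range's results.
    public int[] countBits(int n) {
        int[] bits = new int[n + 1];
        int offset = 1; // largest power of two seen so far
        for (int i = 1; i <= n; i++) {
            if (offset * 2 == i) {
                offset = i; // crossed into the next power-of-two range
            }
            bits[i] = 1 + bits[i - offset];
        }
        return bits;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(new CountingBitsRanges().countBits(5))); // [0, 1, 1, 2, 1, 2]
    }
}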
Final Thoughts
This problem elegantly demonstrates how dynamic programming can optimize repetitive bit computations by using previously known results.
The formula bits[i] = bits[i >> 1] + (i & 1) is the most efficient and elegant solution, running in linear time and using minimal extra space.
It also teaches how bitwise operations can simplify mathematical relationships, a concept frequently useful in performance-critical programming challenges.
