LeetCode Problem 338: Counting Bits

Problem Overview

You are given a non-negative integer n. For every number i in the range [0, n], you need to calculate how many 1s are present in the binary representation of i.
Return the result as an integer array ans of length n + 1, where ans[i] represents the number of 1s in the binary form of i.

Example 1:
Input: n = 2
Output: [0, 1, 1]
Explanation:

  • 0 in binary → 0 → number of 1s = 0
  • 1 in binary → 1 → number of 1s = 1
  • 2 in binary → 10 → number of 1s = 1

Example 2:
Input: n = 5
Output: [0, 1, 1, 2, 1, 2]
Explanation:

  • 0 → 0b0 → 0
  • 1 → 0b1 → 1
  • 2 → 0b10 → 1
  • 3 → 0b11 → 2
  • 4 → 0b100 → 1
  • 5 → 0b101 → 2

The goal is to compute all counts efficiently — not by recalculating for every number from scratch.


Intuition / Approach

A naïve approach would be to count the bits of each number from scratch (for example, by checking every bit position, or by calling a built-in method such as Integer.bitCount). Either way, this takes O(n × number_of_bits) time, which is inefficient for large n.
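As a baseline for comparison, the naïve per-number scan might look like the following sketch (class and method names are illustrative):

```java
public class NaiveCountingBits {
    // Count set bits by testing all 32 bit positions of an int.
    static int countOnes(int x) {
        int count = 0;
        for (int pos = 0; pos < 32; pos++) {
            if (((x >> pos) & 1) == 1) count++;
        }
        return count;
    }

    // O(n * 32): every number is scanned from scratch,
    // reusing nothing from previously computed answers.
    static int[] countBits(int n) {
        int[] ans = new int[n + 1];
        for (int i = 0; i <= n; i++) ans[i] = countOnes(i);
        return ans;
    }
}
```

The dynamic-programming solution below removes the inner 32-step scan entirely.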

To optimize, we use Dynamic Programming based on the relationship between numbers and their binary patterns.

Key Observations:

  1. The number of 1s in a number i is closely related to the number of 1s in a smaller number.
  2. Specifically:
    • If i is even, then the number of 1s in i is the same as in i / 2, because right-shifting removes a trailing 0.
      Example:
      • 2 (10) → same 1s as 1 (1)
    • If i is odd, then the number of 1s in i is one more than in i - 1, because the last bit is 1.
      Example:
      • 3 (11) → one more 1 than 2 (10)

Hence, we can derive the formula:

bits[i] = bits[i >> 1] + (i & 1)

Where:

  • i >> 1 means integer division by 2 (right shift)
  • (i & 1) checks if the last bit is 1 (adds one if odd)

This relationship lets us compute the result in O(n) time.


Algorithm Explanation

  1. Initialize an array bits of size n + 1 to store results.
  2. Set bits[0] = 0, since the binary representation of 0 has no 1s.
  3. For each number i from 1 to n:
    • Compute bits[i] = bits[i >> 1] + (i & 1)
  4. Return the bits array.

This approach builds the result iteratively using previously computed values, avoiding redundant bit calculations.


Java Code Implementation

public class CountingBits {
    public int[] countBits(int n) {
        int[] bits = new int[n + 1];
        bits[0] = 0; // Base case: 0 has no set bits

        for (int i = 1; i <= n; i++) {
            bits[i] = bits[i >> 1] + (i & 1);
        }

        return bits;
    }

    // Optional main method for testing
    public static void main(String[] args) {
        CountingBits obj = new CountingBits();
        int n = 5;
        int[] result = obj.countBits(n);

        System.out.print("Output: [");
        for (int i = 0; i < result.length; i++) {
            System.out.print(result[i]);
            if (i < result.length - 1) System.out.print(", ");
        }
        System.out.println("]");
    }
}

Dry Run Example

For n = 5:

i   Binary   i >> 1   i & 1   bits[i >> 1]   bits[i] = bits[i >> 1] + (i & 1)
0   0000     0        0       0              0
1   0001     0        1       0              1
2   0010     1        0       1              1
3   0011     1        1       1              2
4   0100     2        0       1              1
5   0101     2        1       1              2

Result: [0, 1, 1, 2, 1, 2]


Complexity Analysis

  • Time Complexity:
    O(n)
    We iterate once through all numbers from 1 to n, and each step takes constant time.
  • Space Complexity:
    O(n)
    We use an array of size n + 1 to store the bit counts.

Alternative Approaches

  1. Built-in Bit Counting (Naïve):
    • Use Integer.bitCount(i) for each number from 0 to n.
    • Complexity: O(n × number_of_bits), slower for large n.
  2. Brian Kernighan’s Algorithm:
    • For each number, repeatedly turn off the rightmost set bit until the number becomes zero.
    • Slightly faster per number than checking each bit, but still O(n log n).
  3. DP Based on Power of Two Ranges:
    • Use intervals between powers of two, where each range reuses results from the previous one.
    • Slightly more complex but conceptually similar to the bit shift method.
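The three alternatives above can be sketched in one class. Class and method names here are illustrative; only Integer.bitCount is a real JDK method:

```java
import java.util.Arrays;

public class CountingBitsAlternatives {

    // 1. Built-in: delegate per-number counting to the JDK.
    static int[] withBitCount(int n) {
        int[] ans = new int[n + 1];
        for (int i = 0; i <= n; i++) ans[i] = Integer.bitCount(i);
        return ans;
    }

    // 2. Brian Kernighan: x & (x - 1) clears the lowest set bit,
    //    so the inner loop runs once per set bit, not once per position.
    static int[] withKernighan(int n) {
        int[] ans = new int[n + 1];
        for (int i = 0; i <= n; i++) {
            int x = i, count = 0;
            while (x != 0) {
                x &= (x - 1);
                count++;
            }
            ans[i] = count;
        }
        return ans;
    }

    // 3. Power-of-two DP: every i in [offset, 2*offset) has exactly one
    //    more set bit than i - offset, where offset is the largest power
    //    of two not exceeding i.
    static int[] withOffsets(int n) {
        int[] ans = new int[n + 1];
        int offset = 1;
        for (int i = 1; i <= n; i++) {
            if (offset * 2 == i) offset = i; // reached the next power of two
            ans[i] = ans[i - offset] + 1;
        }
        return ans;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(withBitCount(5)));  // [0, 1, 1, 2, 1, 2]
        System.out.println(Arrays.toString(withKernighan(5))); // [0, 1, 1, 2, 1, 2]
        System.out.println(Arrays.toString(withOffsets(5)));   // [0, 1, 1, 2, 1, 2]
    }
}
```

All three produce the same array as the bit-shift DP; they differ only in how much work is done per number.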

Final Thoughts

This problem elegantly demonstrates how dynamic programming can optimize repetitive bit computations by using previously known results.
The formula bits[i] = bits[i >> 1] + (i & 1) is the most efficient and elegant solution, running in linear time and using minimal extra space.
It also teaches how bitwise operations can simplify mathematical relationships, a concept frequently useful in performance-critical programming challenges.