Returning maximum number of consecutive 1s in list of binary numbers

Kindly review the code, strictly :smiley:

Well, if I had to rewrite this, I would rewrite it as such, which uses a similar approach:

fn print_values(nums: &[i32]) -> i32 {
    let (max, end) = nums
        .iter()
        .fold(
            (0, 0),
            |(max, mut current), x| {
                if *x == 1 {
                    current += 1;
                    (max, current)
                } else {
                    (max.max(current), 0)
                }
            },
        );
    max.max(end)
}

This is similar to what you've done. Here are my notes on your approach:

  • if nums.len() < 10000 {

    This shouldn't be here; this algorithm should theoretically be really fast :slight_smile:

  • if *num == 1 && index != nums.len()

    index will never be nums.len(): the enumerate iterator adaptor starts counting at 0, so the last index it yields is nums.len() - 1:

    ['a', 'b', 'c']
        .iter()
        .enumerate()
        .for_each(|(index, _character)| println!("{}", index));

    Will print

    0
    1
    2

  • if index == nums.len()-1 && count > 0

    This check will run on every iteration of the loop. Why not just do it once at the end, outside the loop?
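
    To illustrate, here is a hypothetical sketch (not the original code, and with a made-up function name) where the trailing run is folded in once, after the loop, instead of checking the index on every iteration:

```rust
// Hypothetical sketch: handle the trailing run of 1s once, after the
// loop, instead of testing `index == nums.len() - 1` every iteration.
fn max_consecutive_ones(nums: &[i32]) -> i32 {
    let mut max = 0;
    let mut count = 0;
    for &num in nums {
        if num == 1 {
            count += 1;
        } else {
            max = max.max(count);
            count = 0;
        }
    }
    // A run that reaches the end of the slice is accounted for here,
    // outside the loop.
    max.max(count)
}
```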

  • // what's the difference if  we put semicolon here at end or what if not 

    panic! returns a special type called the never type (written as an exclamation point: !), which signifies a value that will never exist. In this case, there will never be an "end" to a panic (no way to continue the code afterwards), so the expression has type !. Since a value of type ! can never actually be produced, the compiler can coerce it into any other type:

    let x: String = panic!();

    Is valid code, since x never gets assigned anything.
    In this case, note that putting the semicolon makes the statement return () (The unit type, somewhat like void in other languages), whereas not having the semicolon makes the statement return ! (The never type), which can then become anything.

As to how I'd improve the speed? Nothing really, unless you can predict certain attributes about your data:

  • Are the 1s clumped together in an enormous dataset? Perhaps a binary-search-like approach would help: sample every N items, where N steps through successively smaller powers of 2, find 1s, and then test the ranges to their left and right for more 1s. In that case you should also track the size of the gaps between regions already searched, so you can stop searching once you have found a cluster larger than your current step size.
  • Are the 1s spaced out sporadically in a large dataset? Then perhaps dividing the dataset into chunks, one per thread, would yield good results: each thread finds candidate indices where a 1-cluster could begin, and collecting these, sorting them, and then processing them could easily yield a faster result.
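
The chunked idea can be sketched with scoped threads. The stitching trick here is my own addition, not something from this thread: each chunk reports the run of 1s touching its left edge (prefix), its right edge (suffix), and the best run fully inside it, so that runs crossing chunk boundaries can be merged afterwards. All names below are made up for illustration:

```rust
use std::thread;

// Per-chunk summary: total length, run of 1s at each edge, best interior run.
#[derive(Clone, Copy)]
struct Summary { len: i32, prefix: i32, max: i32, suffix: i32 }

fn summarize(chunk: &[i32]) -> Summary {
    let len = chunk.len() as i32;
    let prefix = chunk.iter().take_while(|&&x| x == 1).count() as i32;
    let suffix = chunk.iter().rev().take_while(|&&x| x == 1).count() as i32;
    let (mut max, mut cur) = (0, 0);
    for &x in chunk {
        if x == 1 { cur += 1; max = max.max(cur); } else { cur = 0; }
    }
    Summary { len, prefix, max, suffix }
}

// Merge two adjacent summaries; a run crossing the boundary is
// a.suffix + b.prefix.
fn merge(a: Summary, b: Summary) -> Summary {
    Summary {
        len: a.len + b.len,
        // The combined prefix extends into b only if a is all 1s.
        prefix: if a.prefix == a.len { a.len + b.prefix } else { a.prefix },
        suffix: if b.suffix == b.len { b.len + a.suffix } else { b.suffix },
        max: a.max.max(b.max).max(a.suffix + b.prefix),
    }
}

fn max_ones_parallel(nums: &[i32], threads: usize) -> i32 {
    // chunks() puts any remainder in one extra chunk; fine for a sketch.
    let chunk = (nums.len() / threads).max(1);
    let summaries: Vec<Summary> = thread::scope(|s| {
        nums.chunks(chunk)
            .map(|c| s.spawn(move || summarize(c)))
            .collect::<Vec<_>>()
            .into_iter()
            .map(|h| h.join().unwrap())
            .collect()
    });
    summaries.into_iter().reduce(merge).map(|s| s.max).unwrap_or(0)
}
```

Whether the thread overhead pays off depends entirely on the dataset size; for small inputs the plain loop will win.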

Pretty much the same as @OptimisticPeach but with ifs instead of fold and max().

Most other comments I thought of have already been covered by their nice reply. However, you don't need to iterate with references (using nums.iter().enumerate()), since you don't need the Vec after the loop. Instead you can consume the Vec and iterate over values using nums.into_iter().enumerate(). Additionally, the enumerate() was unnecessary, leaving:

for num in nums.into_iter()

Which is equivalent to

for num in nums

There is likely no practical difference here (any differences are likely optimized away), but consuming the Vec is the cleaner version in my opinion.

(Note that @OptimisticPeach's version takes a &[i32] and not a Vec<i32>, which is also more idiomatic if you have the ability to change the function signature. These notes about consuming Vec don't apply to that version.)
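
For completeness, here's a sketch of what the consuming loop looks like under the original Vec<i32> signature (the function name is kept from the original post; the body is my own version, not the original):

```rust
fn print_values(nums: Vec<i32>) -> i32 {
    let mut max = 0;
    let mut count = 0;
    // `for num in nums` consumes the Vec (it desugars to `nums.into_iter()`),
    // so `num` is an owned i32 and no dereference is needed.
    for num in nums {
        if num == 1 {
            count += 1;
        } else {
            max = max.max(count);
            count = 0;
        }
    }
    // Account for a run of 1s that reaches the end of the Vec.
    max.max(count)
}
```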


Thanks a lot, dear, for your kind guidance! :heart:

Got it, thanks dear :heart:

For comparison, here's an iterator-based solution that I thought was interesting:

pub fn print_values(nums: Vec<i32>) -> i32 {
    nums.split(|x| *x != 1)
        .map(|v| v.len() as i32)
        .max()
        .unwrap_or(0)
}
cool solution... :+1:
