error[E0277]: the type `str` cannot be indexed by `usize`

For some reason, whenever I try to pull a "word" out of a `String` using a numeric index `n`, as `data[n]`, and then push the result into a `Vec<String>` field of a mutable struct, I get this error:

error[E0277]: the type `str` cannot be indexed by `usize`
  --> src/config_parser.rs:38:60
   |
38 |             TokenType::COMMENT => tokens.comment.push(data[n]),
   |                                                            ^ string indices are ranges of `usize`
   |
   = help: the trait `SliceIndex<str>` is not implemented for `usize`
           but trait `SliceIndex<[_]>` is implemented for it
   = help: for that trait implementation, expected `[_]`, found `str`
   = note: required for `String` to implement `Index<usize>`
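The help messages point at a real distinction: `str` can be sliced by a *range* of byte offsets, but never indexed by a single `usize`. A minimal standalone illustration of this (my own example, not part of the thread's code):

```rust
fn main() {
    let s = String::from("hello world");

    // let c = s[1]; // would not compile: error[E0277], `str` cannot be indexed by `usize`

    // Byte-range slicing works, as long as the range falls on char boundaries.
    let slice = &s[0..5];
    assert_eq!(slice, "hello");

    // Per-character access goes through the chars() iterator instead.
    assert_eq!(s.chars().nth(6), Some('w'));
}
```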

All Code:

use crate::def_lib::ConfFile;


// the data_code_utf8 is a Vec<u8> variable

pub fn start_parser(data_code_utf8: ConfFile) {
    // Decode the UTF-8
    let data_decoded_utf8 = String::from_utf8(data_code_utf8.config_read).unwrap();
    error_checker(data_decoded_utf8);
}

struct Tokens {
    illegal: Vec<String>,
    legal: Vec<String>,
    comment: Vec<String>,
}

enum TokenType {
    // ILLEGAL is an invalid token
    ILLEGAL,
    // LEGAL is a valid token
    LEGAL,
    // COMMENT is for example "// Foo" 
    COMMENT,
}

fn error_checker(data: String) {
    let mut next: i32   = 0;
    // Initialize the tokens variable so the Tokens struct can be used
    let mut tokens = Tokens {
        illegal: vec!["".to_string()],
        legal:   vec!["".to_string()],
        comment: vec!["".to_string()],
    };
    // Count the strings and then check them one by one
    for n in 0..data.split_whitespace().count().clone() {
        match analyze_token(data[n]) {
            TokenType::LEGAL   => tokens.legal.push(data[n]),
            TokenType::ILLEGAL => tokens.illegal.push(data[n]),
            TokenType::COMMENT => tokens.comment.push(data[n]),
        }
        
    }
}



fn analyze_token(token: String) -> TokenType {
    if token == "add_book_path".to_string() {
        return TokenType::LEGAL;
    } else if token == "//".to_string() {
        return TokenType::COMMENT;
    } else { 
        // ILLEGAL is an invalid token
        return TokenType::ILLEGAL;
    }
}

String (and str) are made up of UTF-8 bytes, not tokens, so they have no idea what to do with your n value. In order to process words/tokens you need to make use of the &strs that are produced by split_whitespace(), not merely count them. Like this:

fn error_checker(data: String) {
    // Initialize the tokens variable so the Tokens struct can be used
    let mut tokens = Tokens {
        illegal: vec![],
        legal: vec![],
        comment: vec![],
    };
    for token in data.split_whitespace() {
        match analyze_token(token) {
            TokenType::LEGAL => tokens.legal.push(token.to_string()),
            TokenType::ILLEGAL => tokens.illegal.push(token.to_string()),
            TokenType::COMMENT => tokens.comment.push(token.to_string()),
        }
    }
}

fn analyze_token(token: &str) -> TokenType {
    if token == "add_book_path" {
        return TokenType::LEGAL;
    } else if token == "//" {
        return TokenType::COMMENT;
    } else {
        // ILLEGAL is an invalid token
        return TokenType::ILLEGAL;
    }
}
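As a possible next step (my sketch, not from the thread): have `error_checker` return the `Tokens` struct so callers can actually inspect the results, and replace the if/else chain in `analyze_token` with a `match`. Also derives `Default` so the empty-string placeholder elements are no longer needed:

```rust
#[derive(Default)]
struct Tokens {
    illegal: Vec<String>,
    legal: Vec<String>,
    comment: Vec<String>,
}

// Uppercase variant names kept to match the thread's code;
// the allow silences the naming-convention warning.
#[allow(non_camel_case_types)]
enum TokenType {
    ILLEGAL,
    LEGAL,
    COMMENT,
}

fn analyze_token(token: &str) -> TokenType {
    // A match reads more naturally than an if/else chain here.
    match token {
        "add_book_path" => TokenType::LEGAL,
        "//" => TokenType::COMMENT,
        _ => TokenType::ILLEGAL,
    }
}

fn error_checker(data: &str) -> Tokens {
    let mut tokens = Tokens::default();
    for token in data.split_whitespace() {
        match analyze_token(token) {
            TokenType::LEGAL => tokens.legal.push(token.to_string()),
            TokenType::ILLEGAL => tokens.illegal.push(token.to_string()),
            TokenType::COMMENT => tokens.comment.push(token.to_string()),
        }
    }
    tokens
}

fn main() {
    let tokens = error_checker("// a comment add_book_path oops");
    assert_eq!(tokens.legal, vec!["add_book_path"]);
    assert_eq!(tokens.comment, vec!["//"]);
    assert_eq!(tokens.illegal, vec!["a", "comment", "oops"]);
}
```

Taking `&str` instead of `String` also means callers don't have to give up ownership of the input.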