Code Coverage for I/O, Structs; Static Analysis

So, I'm doing a very simple project: a command-line chess move generator that takes a list of pieces as input and generates the possible moves for a given piece. For this project the goal is to get as close to 100% line coverage as possible.

Question 1:
I have the following for taking input from the command line in the form "Rf1, Kg1, Pf2, Ph2, Pg3" (color is specified by a prompt):

mod command_line_input {
    use std::io;
    pub fn get_input(prompt: &str) -> String{
        println!("{}",prompt);
        let mut input = String::new();
        match io::stdin().read_line(&mut input) {
            Ok(_goes_into_input_above) => {},
            Err(_no_updates_is_fine) => {},
        }
        input.trim().to_string()
    }
}

which I am completely unsure how to test properly.

Question 2:
Additionally, I have the following Struct and implementation:

#[derive(Copy, Clone, Debug)]
struct ChessPiece {
    color: bool, // white = false; black = true
    name: char, // K, Q, R, B, N, and P to identify the King, Queen, Rook, Bishop, Knight, and Pawn respectively
    x_position: u8, // a = 1, ..., h = 8
    y_position: u8, // [1, 8]
}

impl ChessPiece {
    pub fn new(color: bool, piece_input: &str) -> Self {
        Self {
            color,
            name: piece_input.chars().nth(0).unwrap(),
            x_position: grid_char_to_grid_number_letter(piece_input.chars().nth(1).unwrap()),
            y_position: grid_char_to_grid_number_number(piece_input.chars().nth(2).unwrap()),
        }
    }
}

with the following test:

#[test]
fn test_piece_new() {
    let white_king_a_1 = ChessPiece { color: false, name: 'K', x_position: 1, y_position: 1 };
    let king_a_1_str = "Ka1";
    let parsed_piece = ChessPiece::new(false, king_a_1_str);

    assert_eq!(parsed_piece.color, white_king_a_1.color);
    assert_eq!(parsed_piece.name, white_king_a_1.name);
    assert_eq!(parsed_piece.x_position, white_king_a_1.x_position);
    assert_eq!(parsed_piece.y_position, white_king_a_1.y_position);
}
I am using the code coverage tool in JetBrains CLion, and it reports that the implementation part of the struct is covered, but the definition of the struct is not. How do I remedy this?

Question 3:
Is there a stricter-than-standard setting for cargo? I understand it already does a decent amount of static analysis during compilation, but I'm looking to have as much analysis done as is reasonable.

Thanks in advance!

Edit: Should this be here or in "Code Review"?

The only practical way to test something which is hard-coded to read from stdin is to run the entire application as an integration test. I've used the assert_cmd crate for testing whole CLI applications with a lot of success.
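For example, a minimal sketch of such an integration test, assuming assert_cmd is added as a dev-dependency and that your binary target is called chess_moves (both names are assumptions, substitute your own):

// tests/cli.rs
use assert_cmd::Command;

#[test]
fn generates_moves_from_stdin() {
    // Build and run the crate's binary, piping the piece list into stdin.
    let mut cmd = Command::cargo_bin("chess_moves").unwrap();
    cmd.write_stdin("Rf1, Kg1, Pf2, Ph2, Pg3\n")
        .assert()
        .success();
}

You can chain further assertions on the captured stdout/stderr if you want to check the generated moves, not just the exit status.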

Alternatively, you could change your get_input() function so it accepts something implementing std::io::BufRead (which a locked io::stdin() does). That way your tests can substitute a std::io::Cursor<&[u8]> containing some pre-defined input, then check that the string you get back is what you expect.
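Here's one way that refactor could look (a sketch, not tested against your project; the function and test names are just illustrative):

use std::io::BufRead;

// Generic over any buffered reader, so tests can pass in a Cursor
// while main() passes a locked stdin.
pub fn get_input<R: BufRead>(prompt: &str, mut reader: R) -> String {
    println!("{}", prompt);
    let mut input = String::new();
    // Ignore read errors, as in the original version.
    let _ = reader.read_line(&mut input);
    input.trim().to_string()
}

#[test]
fn reads_trimmed_line() {
    use std::io::Cursor;
    let input = Cursor::new("Rf1, Kg1, Pf2, Ph2, Pg3\n".as_bytes());
    assert_eq!(get_input("Enter pieces:", input), "Rf1, Kg1, Pf2, Ph2, Pg3");
}

In main() you'd call it with the real stdin, e.g. let stdin = std::io::stdin(); let pieces = get_input("Enter pieces:", stdin.lock());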

I'm guessing JetBrains is actually noticing that the Debug, Clone, and Copy implementations aren't being tested, and the diagnostic is attached to the ChessPiece definition because that's the span given to code generated by a #[derive] macro.

To be honest, if it were a real-world application I wouldn't bother testing that sort of thing. You can normally assume that derive macros are correct, but if you want that section to appear as tested, it's simple enough to add a test that clones an object and then uses the == operator (from #[derive(PartialEq)]) to check that the two are equal.
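Something along these lines (a sketch; it assumes you also add PartialEq to the derive list):

#[derive(Copy, Clone, Debug, PartialEq)]
struct ChessPiece {
    color: bool,
    name: char,
    x_position: u8,
    y_position: u8,
}

#[test]
fn derived_impls_are_exercised() {
    let piece = ChessPiece { color: false, name: 'K', x_position: 1, y_position: 1 };
    // Exercises the derived Clone/Copy and PartialEq implementations.
    let copy = piece.clone();
    assert_eq!(piece, copy);
}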

You generally shouldn't be writing tests for the Debug implementation, because that's meant to give a purely human-readable representation of the ChessPiece. It should be considered a black box by the rest of the program, and any change to the ChessPiece definition will affect its Debug representation.

I know this is meant as an educational exercise, but your comment reminds me of Goodhart's law: "When a measure becomes a target, it ceases to be a good measure"... By aiming for the goal of 100% coverage you lose sight of why you want good test coverage in the first place.

Coverage is a tool to help you know which parts of your code base are being tested and which parts aren't. This lets you infer where the less reliable or more complicated/coupled parts are because people tend to be lazy and not test complicated things that require a lot of setup.

Check out clippy.

While rustc's built-in lints are normally fairly objective and try to avoid false positives, clippy contains more opinionated lints that help identify possible problems or anti-patterns and guide you towards writing more idiomatic code.
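For example, you can opt into the stricter (and noisier) lint groups from inside your crate; which groups to enable is a matter of taste:

// At the top of main.rs or lib.rs -- a sketch of opting into stricter clippy lints.
// clippy::pedantic in particular produces many opinionated warnings.
#![warn(clippy::all)]
#![warn(clippy::pedantic)]

Then run cargo clippy as usual, or cargo clippy -- -D warnings in CI if you want every warning treated as an error.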

I've thought something similar several times during this process, but yes, it is an educational exercise, so I'm writing far more tests than are truly necessary. Ideally there would be more focus on edge cases and the like and less on the philosophy of "run every line of code once", but that's not my specification :laughing:

This definitely does the job, thanks!

This is exactly the sort of thing I'm looking for.

Out of curiosity, have you happened to use this:
Source Based Code Coverage

I'm a bit wary of it because it's unstable, and I don't know how to use the -Z compiler flags, which I believe it works with.

Code coverage of 70~90% usually means the maintainers care a lot about unit testing, but 100% can be a sign that they're optimizing for code coverage rather than improving the unit tests. In that mode it's common to write poor unit tests that don't contribute to the robustness of the code but still contribute to the coverage number.

I found this blog post worth reading.
