Rustlings - I didn't know this!

I'm working through Rustlings at the moment, and after answering iterators3 I compared my answer for the divide function with the Rustlings solutions. They use a traditional if-else block, while mine mainly used match.

I then tried seeing what the various chat AIs would come up with, and adapted their answers to what Rustlings expected. They didn't all get it exactly right, but the slickest answer was this one from ChatGPT:

fn divide(a: i32, b: i32) -> Result<i32, DivisionError> {
    match b {
        0 => Err(DivisionError::DivideByZero),
        _ if a % b == 0 => Ok(a / b),
        _ => Err(DivisionError::NotDivisible(NotDivisibleError { dividend: a, divisor: b })),
    }
}

On the other hand, Google Bard (and others, including Bing Chat) used a nested match:

fn divide(a: i32, b: i32) -> Result<i32, DivisionError> {
    match b {
        0 => Err(DivisionError::DivideByZero),
        _ => match a % b {
            0 => Ok(a / b),
            _ => Err(DivisionError::NotDivisible(NotDivisibleError { dividend: a, divisor: b })),
        },
    }
}

I never knew you could have more than one catch-all arm "_" as in the ChatGPT solution.
You learn something new every day! :slightly_smiling_face:
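For readers who haven't done the exercise, here is a self-contained sketch of the ChatGPT variant. The error type declarations are my reconstruction from the names used in the snippets above; the actual Rustlings exercise may declare them slightly differently.

```rust
// Reconstructed error types, based on the names used in the snippets above.
// The derives are an assumption, added so the results can be compared in tests.
#[derive(Debug, PartialEq, Eq)]
struct NotDivisibleError {
    dividend: i32,
    divisor: i32,
}

#[derive(Debug, PartialEq, Eq)]
enum DivisionError {
    DivideByZero,
    NotDivisible(NotDivisibleError),
}

fn divide(a: i32, b: i32) -> Result<i32, DivisionError> {
    match b {
        0 => Err(DivisionError::DivideByZero),
        // A `_` pattern with a match guard: it only matches when the guard holds.
        _ if a % b == 0 => Ok(a / b),
        _ => Err(DivisionError::NotDivisible(NotDivisibleError {
            dividend: a,
            divisor: b,
        })),
    }
}

fn main() {
    assert_eq!(divide(81, 9), Ok(9));
    assert_eq!(divide(81, 0), Err(DivisionError::DivideByZero));
    assert!(matches!(divide(81, 6), Err(DivisionError::NotDivisible(_))));
    println!("all checks passed");
}
```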


A few things to note:

  1. Technically, your first arm with an underscore, i.e. _ if a % b == 0, is not a catch-all arm, since it carries a match guard and only matches when the guard condition is true. Therefore, there is only one true catch-all in that code fragment.

  2. Multiple catch-all arms are actually allowed in a match construct. The compiler simply uses the first matching pattern and ignores the subsequent arms. However, this isn't good practice, and the compiler will give you an "unreachable pattern" warning for the arms that can never be reached.

  3. ChatGPT responses are not allowed in the forums, I believe. Just getting in there before the moderators remind us. :grinning:
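Point 2 can be seen with a tiny example of my own (a sketch; this compiles fine, it just emits an `unreachable_patterns` warning for the second catch-all):

```rust
fn describe(n: i32) -> &'static str {
    match n {
        0 => "zero",
        _ => "first catch-all",  // this arm wins for every non-zero value
        _ => "second catch-all", // allowed, but warns: unreachable pattern
    }
}

fn main() {
    assert_eq!(describe(0), "zero");
    assert_eq!(describe(7), "first catch-all");
    println!("ok");
}
```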


I'd say there's a significant difference between, on the one hand, discussing one's experience with ChatGPT (or even asking questions about its answers) and quoting responses for that purpose, and, on the other hand, using ChatGPT to answer another person's question without adding much of your own value. Even more so here: this post doesn't even quote ChatGPT response text, only some code examples it generated (or that were written with its help), which seems even less of a problem to me :slight_smile:

In any case, given that we still have little experience with how ChatGPT content and this forum interact, even I as a moderator am not entirely sure what our exact take on the matter is. I can find this rule in the TOS:

By making Content available, you represent and warrant that:

  • the Content is not spam, is not machine- or randomly-generated, and does not contain unethical or unwanted commercial content [etc...]

which I would understand to mean, in particular, machine-generated content that is not clearly marked as such (i.e. content pretending not to be machine-generated), and perhaps posts consisting entirely of machine-generated content.

Well, anyway, my personal common-sense take is that as long as ChatGPT content is clearly marked as such, you're not in any trouble. If ChatGPT ever gets overused in a way that negatively impacts the forum discussion, we'll figure out the details and kindly let you know, until a clearer precedent on best practices is established.


Why not? This was just to illustrate a learning point for myself, rather than an answer to a question.


I think StackOverflow has banned ChatGPT generated code, but I didn't think this forum had decided anything specific yet.


I assume @hax10 might simply have seen a different recent discussion, where some full ChatGPT responses to another user's question were quoted without much additional value being added (they were properly marked as such, so even that doesn't seem like a particularly bad violation of the rules to me), and a moderator commented that there's a rule that one is not allowed to "post ChatGPT responses here" (I assume referring to the rule I quoted above, unless I'm missing some other rule).

Your post seems entirely fine to me, as I explained above, and in case there was any rule seeing a problem with your post, I will do my best to get such a rule adjusted.


After all, compiler errors and lints are machine generated too :sweat_smile:


I think having people blindly cut and paste ChatGPT-generated code as a solution to someone's question would not be cool.

However a comparison and discussion about them, as here, seems quite reasonable.

Also, if ChatGPT comes up with a useful trick, a cunning way of doing something that genuinely few humans know about, or that is entirely unthought of by any human so far, it's worthy of presenting here.

Both of the above seem to apply to the OP here to some extent.


Yes, and they are often incorrect. That's probably how ChatGPT-generated content should be treated: not banned completely, but only used when the person citing ChatGPT has verified that its output is correct.

Here is my own ChatGPT session (screenshot omitted):

Would you want that being presented anywhere, hmm?

Now, if you shame ChatGPT, it presents you with a surprisingly useful answer (screenshot omitted):

IOW: it's perfectly fine to use ChatGPT to discover things, but always verify your discoveries! And never imply that ChatGPT is an authoritative source. It's not. Its hallucinations are surprisingly vivid and convincing! Never trust what ChatGPT is saying, but if you've verified the presented information and found it correct… it can be useful, sure. It "knows" more than you or I do… it just likes to forget some things, or even make up new, plausible "truths".


Yes, that is my take on it. In fact, the code snippets I quoted weren't exactly what the chat AIs produced: I tweaked them to match what the Rustlings exercise expected, and I tested what they generated. The initial version from Bard was not quite right.

A problem with ChatGPT right now is that it does not provide any references (that is due to change). But Bing Chat, because it is connected to the web, does do so where applicable.

Also, you can copy/paste the answer and, when pasted, it states "generated by Bing Chat" along with any links. Some of the other AIs are similar in that respect.

They also generally provide an explanation of how the code they've generated works and sometimes example calls of the code.

But even then, as I've discovered with Python, the generated code can still be wrong, so you have to keep your brain engaged. :slightly_smiling_face:

Though my OP was primarily a discussion topic rather than an answer to a question, I did in fact check that it worked before posting. :slightly_smiling_face:

And I have indeed run into similar scenarios as the one you describe. In fact, I first tried phind (which is normally very good) but it got it wrong this time. I then fed it the correct code and it apologised and then explained back to me how the code that I'd posted worked!


Given that the thread has drifted into a discussion about how we would perceive people posting ChatGPT or similar generative AI responses as replies in this forum, here are my thoughts. (This is a hypothetical comment, not about this specific thread, which, as others have noted, was well introduced and informative.)

I think I would be mildly annoyed. If it happened in one thread, I would just mute it; if it happened repeatedly, I would likely look elsewhere for real insights about Rust.

Machine-generated content is so common, and so much of a plague on the internet, that I don't find a compelling reason to actively seek it out. If I wanted a generative AI's take on something, I would just ask it myself.


Of those two examples, the single match statement seems simpler and easier to understand (to me). That's a pretty decent example of a conditional match arm, too.

As for if-else vs. match, that's at least partially a stylistic question; I don't think there's a big substantive difference in this case. I'll admit to having a lot more experience with C, so I tend to think in terms of if-else instead of match (except where match replaces a simple switch statement in C). I probably would have written this with if-else simply because I'm more used to it. But I like the single match statement with the conditional match arm.
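For comparison, here's roughly what the if-else version looks like (a sketch; the error types are redeclared here from the names in the earlier snippets so the example stands alone, and the derives are my assumption):

```rust
// Error types reconstructed from the snippets earlier in the thread.
#[derive(Debug, PartialEq, Eq)]
struct NotDivisibleError {
    dividend: i32,
    divisor: i32,
}

#[derive(Debug, PartialEq, Eq)]
enum DivisionError {
    DivideByZero,
    NotDivisible(NotDivisibleError),
}

// The same logic as the match versions, written with if-else.
fn divide(a: i32, b: i32) -> Result<i32, DivisionError> {
    if b == 0 {
        Err(DivisionError::DivideByZero)
    } else if a % b == 0 {
        Ok(a / b)
    } else {
        Err(DivisionError::NotDivisible(NotDivisibleError {
            dividend: a,
            divisor: b,
        }))
    }
}

fn main() {
    assert_eq!(divide(81, 9), Ok(9));
    assert_eq!(divide(1, 0), Err(DivisionError::DivideByZero));
    println!("ok");
}
```

Whether this reads better than the single match with a guard really does come down to taste; both compile to the same checks.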

Might be good to move the "in what ways is ChatGPT allowed here" discussion into its own thread?

This topic was automatically closed 90 days after the last reply. We invite you to open a new topic if you have further questions or comments.