I just signed up on the site, and I read the privacy policy (which seems to me not to be GDPR-compliant) and the terms of service.
The Terms of Service say that "User contributions made on or after 2020-07-17 are dual-licensed under the MIT and Apache 2.0 licenses unless otherwise stated."
But they also say: "This document is CC-BY-SA. It was last updated May 31, 2013."
Was the licensing change really planned seven years in advance? Or is the update date wrong?
The second possibility is quite problematic, as the TOS also says: "The Rust Foundation reserves the right, at its sole discretion, to modify or replace any part of this Agreement. It is your responsibility to check this Agreement periodically for changes."
As a side note, in the age of ChatGPT and Copilot, the sentence "the Content ... is not machine-...generated" can become very restrictive.
The de facto rules are a bit fluid at the moment, as AI tools are rapidly evolving. As I understand the current situation, the moderators generally make a distinction between:
- Discussions explicitly about AI-generated code, which are tolerated within reason, and
- AI-generated responses to general questions/topics, which are not allowed.
The date does indeed seem wrong; thanks for pointing that out. I'll make sure we get that fixed.
Regarding ChatGPT, there was some public discussion in this linked thread, where I shared my personal opinion on how the rules could be understood. In my opinion, posting clearly marked machine-generated content with good intentions should always be moderated leniently anyway; e.g. even in the thread @leudz linked above, it looks like a moderator simply left a message and nothing more happened.
I also believe it could make sense to improve the phrasing of the rules on that eventually. But ChatGPT is still somewhat new, and posts that are actually considered to violate the rules w.r.t. machine-generated content have so far been very rare, so waiting for some further experience with where exactly we want to draw the line might help as well.
It should, however, already be clear that machine-generated output or random data is not automatically forbidden if it's only part of a response, used in a way that makes sense, and properly marked (or easy to identify) as such. After all, compiler output is machine-generated, too. Or if you demonstrate a Rust program that generates random numbers, there's obviously nothing wrong with showing some of its output, too.
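For instance, a minimal sketch of such a program (assuming the `rand` crate, version 0.8, as a dependency):

```rust
use rand::Rng; // rand = "0.8" in Cargo.toml

fn main() {
    let mut rng = rand::thread_rng();
    // Generate and print five random numbers between 1 and 100 (inclusive).
    for _ in 0..5 {
        println!("{}", rng.gen_range(1..=100));
    }
}
```

Pasting a run of its output alongside it is "machine-generated content" in the most literal sense, and there's obviously nothing wrong with that.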
While the value of simply forwarding a question to ChatGPT and forwarding back the whole answer is of course so low that that's not acceptable (after all, where would it lead if that started happening in all our threads?), I've personally found it a useful tool to enhance writing answers, and I've called it out in those cases, both to be transparent and to encourage others to use it for similar purposes when writing answers.
For example, I've found it useful for generating GraphViz visualizations, which would otherwise have been both too much effort and too hard, since I'm not very familiar with GraphViz's input format.
Just today, I've also found it useful for writing a longer, straightforward, relatively boilerplate-heavy code example while on a phone, where typing it out by hand would definitely have been too tedious.
In all these cases, I didn't use it for generating natural-language text or explanations, though, and I'm not sure yet whether (and if so, when) there'd be situations where doing so would be useful to me without feeling somewhat wrong. As for the code or graphs, of course I review (and, if necessary, iterate by re-generating and adjusting) the content before I post it. I've also found ChatGPT useful as a spell-checker and grammar checker that has no problem accepting replies verbatim, including code and markdown, so I can proof-read my replies a little less rigorously w.r.t. typos; again, a time saver.
On that note, I can imagine it being useful for natural-language writing, too, for people not so proficient in English. ChatGPT can be a very capable translator, especially for translating into English; or it could give advice on how to formulate things better, etc. When using it as a translator, or asking it for writing tips/feedback and incorporating those, it wouldn't even be necessary (or useful) to mark that ChatGPT was involved at all, similar to how people typically won't mention that Google Translate was used or a word was looked up in a dictionary while writing.
The date is updated now. (There are no further changes besides that.[1]) And we should make sure not to forget to update it for any future changes, too.