Is there actually a semantic difference between `FromStr` and `TryFrom<&str>`?

I'm trying to figure out if libraries should implement both FromStr and TryFrom<&str> for types that can be created from strings. For now I think they should, and I don't think there is a semantic difference between them. However, there's a blog post which claims there is, and I've found people both agreeing and disagreeing. What does the wider Rust community think?

My arguments:

  • FromStr was created before TryFrom, which is an indication that TryFrom is superior because it's more general and flexible
  • I remember discussions about impl<'a, T: FromStr> TryFrom<&'a str> for T and deprecating FromStr but it didn't happen because it would conflict with impl<U, T: Into<U>> TryFrom<T> for U which was more important.
  • Also implementing TryFrom<String> can avoid allocations in some cases, and then it makes sense to also have TryFrom<&str>
  • TryFrom<&str> allows borrowing the input which may be useful for types containing Cow<'_, str>
  • std documentation doesn't say anything about it, but it does mention semantic differences between other traits that are similar to each other (e.g. AsRef vs Borrow, io::Write vs fmt::Write); could be just a documentation issue, though
5 Likes

There are both technical and semantic differences.

Semantically, FromStr is for parsing a string. I.e., when the string has some structure that should be discovered and some data should be extracted from it.

In contrast, TryFrom is for performing a conversion. I.e., when the target type is semantically the same "kind" of type. So TryFrom<&str> would be appropriate to implement for string-like types, but not for arbitrary types parseable from a string.
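To make that concrete, here's a small sketch (Username is a made-up type; IpAddr is from std): parsing discovers structure in the text, while a string-like newtype is still "the same kind of thing" as its input.

```rust
use std::convert::TryFrom;
use std::net::IpAddr;

// A made-up string-like newtype: converting &str into it is still a
// string-to-string conversion (plus validation), so TryFrom<&str> fits.
struct Username(String);

impl TryFrom<&str> for Username {
    type Error = ();
    fn try_from(s: &str) -> Result<Self, Self::Error> {
        if !s.is_empty() && s.chars().all(|c| c.is_ascii_alphanumeric()) {
            Ok(Username(s.to_owned()))
        } else {
            Err(())
        }
    }
}

fn main() {
    // Parsing: discovering structure inside the text, so FromStr / str::parse fits.
    let addr: IpAddr = "127.0.0.1".parse().unwrap();
    let user = Username::try_from("alice").unwrap();
    println!("{addr} {}", user.0);
}
```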

Technically, FromStr::from_str() takes a string reference with an implicit, elided lifetime parameter. This means that it's decoupled from the return type (Self in particular), so it's not possible for eg. impl FromStr for &SomeBorrowedType. In contrast, this would be possible with impl<'a> TryFrom<&'a str> for &'a SomeBorrowedType.
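A sketch of that technical difference, with a made-up borrowing type:

```rust
use std::convert::TryFrom;

// A borrowing wrapper: the output has to live as long as the input string.
struct Trimmed<'a>(&'a str);

// Possible: TryFrom lets the impl tie Self's lifetime to the input's.
impl<'a> TryFrom<&'a str> for Trimmed<'a> {
    type Error = ();
    fn try_from(s: &'a str) -> Result<Self, Self::Error> {
        let t = s.trim();
        if t.is_empty() { Err(()) } else { Ok(Trimmed(t)) }
    }
}

// Not possible: FromStr's signature is
//     fn from_str(s: &str) -> Result<Self, Self::Err>
// where the input lifetime cannot appear in Self, so there is no way to
// write an `impl FromStr for Trimmed<'a>` that borrows from `s`.

fn main() {
    let t = Trimmed::try_from("  hello  ").unwrap();
    assert_eq!(t.0, "hello");
}
```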

8 Likes

That reiterates the claim from the blog post but doesn't explain why there should be a semantic difference.

2 Likes

I pretty much agree with Sabrina's short summary, and I think most of your bullet points actually highlight the semantic difference too (explored below). That said, I don't see any inherent harm and maybe some benefit to implementing both when it makes sense (e.g. no performance traps). FromStr is more idiomatic if you're actually parsing and not consuming a &str or String, though, so you can't count on others having implemented TryFrom (or on the implementations being identical when they have).

This implies they're in competition or something, so it only makes sense in the context of "it's sensible to implement both but I will only implement one". Being newer and more generic isn't a slam-dunk Better Trait :trophy: guarantee in my book. The more concise trait communicates intent, well, more concisely, can be more inference-friendly, and is sometimes more idiomatic.

If it's not sensible to implement both (e.g. because TryFrom is more flexible), implement the one that makes sense.

If it's sensible to implement both and you do, which is "superior" is moot -- apparently neither was superior for this purpose.

If you can do this [avoid allocations by taking a String], you consist of a String in some sense, and thus it makes semantic sense to implement TryFrom<String>. TryFrom<&str> and FromStr are then for consumer convenience only, unless you can e.g. avoid allocating when given a &str in some significant number of cases.

This is basically the dual of the last point.

If you can do this [borrow the input], you consist of a &'a str in some sense and thus it makes sense to implement TryFrom<&'a str>. FromStr probably doesn't make sense unless you can cheaply [1] discard the borrowed data and return a 'static version of yourself in some significant number of cases.


  1. e.g. without allocating ↩

1 Like

I mean, if TryFrom had existed before FromStr, do you think it's likely someone would have added FromStr later? I personally don't. At best it'd be a trait alias for TryFrom<&str>.

Also, there's a case where TryFrom<String> is faster even when Self doesn't contain a String: when you want to return the input string in the error as context. Yeah, it only optimizes the error path, but it still optimizes something...
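A sketch of what that can look like (the Count and BadCount names are made up): the error takes ownership of the rejected input, so reporting it costs no extra allocation.

```rust
use std::convert::TryFrom;

#[derive(Debug)]
pub struct Count(u64);

// Made-up error type that hands the rejected input back as context.
#[derive(Debug)]
pub struct BadCount {
    pub input: String,
}

impl TryFrom<String> for Count {
    type Error = BadCount;
    fn try_from(s: String) -> Result<Self, Self::Error> {
        match s.parse::<u64>() {
            Ok(n) => Ok(Count(n)),
            // No allocation here: the input String itself becomes the context.
            Err(_) => Err(BadCount { input: s }),
        }
    }
}

fn main() {
    let err = Count::try_from("twelve".to_string()).unwrap_err();
    assert_eq!(err.input, "twelve");
}
```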

You also can't count on FromStr doing what you want unless you've read the doc (or code). And if anyone implements them inconsistently, their code is suspicious, to put it politely. Rust has a shitload of implied trait properties - e.g. everyone kinda expects x == x.clone(), but it's not a documented property of either PartialEq or Clone. (Yeah, I was thinking about writing these up somewhere.)

Anyway good point about there not being harm having all of them implemented.

Point of comparison: i32 is FromStr but not TryFrom<&str>.

Not under that name, probably, but quite possibly under another name such as Parse.

Note that FromStr doesn't really exist to call <T as FromStr>::from_str; it exists to provide str::parse. And as part of that, it implies that a FromStr type has some canonical stringified representation to parse from.
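That is, these two lines do the same thing, and parse is the intended entry point:

```rust
use std::str::FromStr;

fn main() {
    // str::parse is declared as `fn parse<F: FromStr>(&self) -> Result<F, F::Err>`,
    // so it's just a convenient front end for the FromStr impl.
    let direct = i32::from_str("42");
    let via_parse = "42".parse::<i32>();
    assert_eq!(direct, via_parse);
}
```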

For better or worse, the example set by std is that errors shouldn't contain information borrowed from the caller (e.g. fs operations' errors don't know what path failed) and the caller should attach that context to the error if it's desired. For a parsing error, that means just having positions and calling e.g. a with_source_code method to "hydrate" the error type for better display.

Is not having to remember to do something like fs::File::open(&path).with_context(|| format!("trying to open {path}")) to get useful errors better API ergonomics? Absolutely. But std errs on the side of biasing for the success case here, and doing the same in libraries avoids this ownership trouble. (E.g. consider the case where you will propagate the error with the owned context, but in the success case you still want to have ownership of the context to continue using it; this basically requires the hydration approach.)
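For reference, the context-attaching pattern being described looks roughly like this with the anyhow crate (anyhow is assumed as a dependency here; std itself has no with_context):

```rust
use std::fs;
use std::path::Path;

use anyhow::{Context, Result};

fn read_config(path: &Path) -> Result<String> {
    // std's io::Error doesn't remember the path; the caller attaches it.
    fs::read_to_string(path)
        .with_context(|| format!("trying to open {}", path.display()))
}
```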

2 Likes

It's not that there "should" be; it's that there is. Parsing is not the same as a mere conversion.

2 Likes

That's probably true but it doesn't mean they're always semantically the same (technically or conventionally).

I don't care enough to spend a lot more time trying to convince others there is a semantic difference (or have others try to change my viewpoint), but hopefully I've summed up why I feel there is.

One could do that, and in that case also having FromStr / TryFrom<&str> would provide a performance tradeoff to choose from I suppose. I don't think I've ever really seen this pattern in the wild though, where the input String is not consumed but may be returned in the error case (presumably unchanged). Instead you take the more general &str and not String; in the error case there's no reason to pass back the input since &str is Copy (and the caller can always use the String they own or create one from their &str if they need one to bubble up or whatever).

So there's no optimization hit in the "I own a String" case and no optimization hit (of converting &str to String in the error path) in the "I just have a &str" case.

I suppose there could be an ergonomic hit in some circumstances.

This pattern does appear in OsString::into_string(), and I've always considered it unusual. Personally, I prefer error types that implement the Error trait.
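For reference, the signature in question is OsString::into_string(self) -> Result<String, OsString>, i.e. the error is the original value rather than an Error type:

```rust
use std::ffi::OsString;

fn main() {
    let os = OsString::from("hello");
    match os.into_string() {
        Ok(s) => println!("valid UTF-8: {s}"),
        // The Err variant is the original OsString handed back, not an Error type.
        Err(original) => println!("not UTF-8, got it back: {original:?}"),
    }
}
```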

It's not that pattern because it does consume the OsString (the output has the same representation when the conversion is possible).

(I guess "consume" is vague. I meant "the resulting type is in some sense storing the input type".)

That makes sense. I took "consume" to mean "move". There is some subtlety here.

1 Like

I consider FromStr to be the dual of Display, and to be essentially unrelated to TryFrom/Into.

Which might just move the problem -- now the question is whether you should implement Display -- but maybe that'll help make it easier to resolve.
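A minimal sketch of that pairing (Point is made up): Display defines the canonical text form, and FromStr parses it back, so display-then-parse round-trips.

```rust
use std::fmt;
use std::num::ParseIntError;
use std::str::FromStr;

#[derive(Debug, PartialEq)]
struct Point { x: i32, y: i32 }

impl fmt::Display for Point {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{},{}", self.x, self.y)
    }
}

impl FromStr for Point {
    type Err = ParseIntError;
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // Minimal parser for the "x,y" form produced by Display above.
        let (x, y) = s.split_once(',').unwrap_or((s, ""));
        Ok(Point { x: x.parse()?, y: y.parse()? })
    }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    assert_eq!(p.to_string().parse::<Point>().unwrap(), p);
}
```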

11 Likes

I think simply because nobody implemented it.

This is exactly what several people (including myself) complained about. It's annoying to the point that I prefer .unwrap() to ? for error handling in all applications that are not intended for a less-technical general public.

I actually implement both TryFrom<&str> and TryFrom<String> in my crates so that those who need to use the string afterwards just pass &str (which allocates in the error case) and those who don't pass String. This is the most flexible for consumers.
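A minimal sketch of that layout (Tag and InvalidTag are made-up names):

```rust
use std::convert::TryFrom;

pub struct Tag(String);

#[derive(Debug)]
pub struct InvalidTag;

fn validate(s: &str) -> Result<(), InvalidTag> {
    if !s.is_empty() && s.chars().all(|c| c.is_ascii_lowercase()) {
        Ok(())
    } else {
        Err(InvalidTag)
    }
}

// For callers who already own a String and don't need it afterwards:
// the buffer is reused, no allocation at all.
impl TryFrom<String> for Tag {
    type Error = InvalidTag;
    fn try_from(s: String) -> Result<Self, Self::Error> {
        validate(&s)?;
        Ok(Tag(s))
    }
}

// For callers who only have a &str or want to keep their String:
// this allocates to take ownership of the data.
impl TryFrom<&str> for Tag {
    type Error = InvalidTag;
    fn try_from(s: &str) -> Result<Self, Self::Error> {
        validate(s)?;
        Ok(Tag(s.to_owned()))
    }
}
```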

But I think the real question is: Is there any actual harm in implementing TryFrom<&str> and TryFrom<String> in addition to FromStr in a public library? Are there some bugs being prevented by not providing the impl? If I were to make a PR against alloc/core to add those impls for integers would it be rejected for any reason other than "this adds more code that we don't want to maintain"?

Being more general and flexible doesn't mean it's better. On the contrary, it's often a liability, unless you actually intend to use that flexibility. More moving parts are, by default, always an issue and something to avoid. This statement is basically the core of your argument, so imho, since it isn't true, the rest collapses as well.

Saying "TryFrom is more general so it's better" is like saying "a tuple is more general than a named struct with named fields, so you should always use tuples". Or saying "String is more general than PathBuf so it's better". Or "instead of taking T: Iterator you should take a closure F: FnMut() -> Option<Item>". Less flexibility, and thus fewer possibilities for errors, is exactly why we prefer strongly typed objects.

Some specific ways in which extra flexibility hurts:

  • You can use s.parse::<T>() to easily parse a string as T. You can't do the same with s.try_into(), because try_into doesn't support turbofish (see the sketch after this list).

  • There is effectively a blanket impl of TryFrom<B> for any A: From<B> (via impl<T, U: Into<T>> TryFrom<U> for T). This prevents you from adding your own TryFrom impl if there is a From impl, but doesn't prevent impl FromStr for A.

  • Even if there is no From<&str> impl now, you should always consider the possibility that you would need to add it later. If you do, any manually written TryFrom<&str> impl becomes invalid (it now conflicts with the blanket impl), so significant changes to your and downstream code are required.
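A sketch of the first two points, with a made-up type (Flag) implementing both traits:

```rust
use std::convert::{TryFrom, TryInto};
use std::str::FromStr;

struct Flag(bool);

impl FromStr for Flag {
    type Err = ();
    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "on" => Ok(Flag(true)),
            "off" => Ok(Flag(false)),
            _ => Err(()),
        }
    }
}

// Adding `impl From<&str> for Flag` later would conflict with this impl
// (through the blanket `impl<T, U: Into<T>> TryFrom<U> for T`), but it
// would not conflict with the FromStr impl above.
impl TryFrom<&str> for Flag {
    type Error = ();
    fn try_from(s: &str) -> Result<Self, Self::Error> {
        s.parse()
    }
}

fn main() {
    let _a = "on".parse::<Flag>().unwrap();   // turbofish works with parse
    // let _b = "on".try_into::<Flag>();      // error: try_into takes no type parameter
    let _c: Flag = "on".try_into().unwrap();  // you have to annotate the binding instead
}
```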

A canonical example where the From<&str> and FromStr impls are entirely different is serde_json. A JSON Value can contain a String, so there is impl From<&str> for Value, which just implicitly allocates a String and wraps it in Value::String. However, there is a separate impl FromStr for Value which parses the string as JSON.
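For reference (this needs serde_json as a dependency):

```rust
use serde_json::Value;

fn main() {
    let text = r#"{"a": 1}"#;

    // From<&str>: a mere conversion, wraps the text as a JSON string value.
    let wrapped = Value::from(text);
    assert_eq!(wrapped.as_str(), Some(text));

    // FromStr (via str::parse): actually parses the text as JSON.
    let parsed: Value = text.parse().unwrap();
    assert_eq!(parsed["a"].as_i64(), Some(1));
}
```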


Put another way, would you say that any fn (A) -> B should be made into an impl From<A> for B, or vice versa? Would you say that instead of Default one should impl From<()> for T? Or impl From<&T> for T instead of Clone? The difference with FromStr is basically the same.

The heuristic for using TryFrom is situations where you would naturally consider implementing From, but the conversion cannot be made infallible.

8 Likes

I wouldn't say so, it's just one part of it.

That's just a strawman. My point was that if we had had TryFrom from the beginning, we wouldn't have needed FromStr.

That's a doubly invalid argument, because I suggested implementing both, and also because in a hypothetical universe where TryFrom came before FromStr, the parse method would require Self: TryInto<T> instead.

That's quite illogical. Either all strings are convertible, which implies From and no need for FromStr (or just have it with Err = Infallible, like String does), or some strings aren't, and then you must not add From, since it'd have to panic.

Impossible because of the above.

That's a great example and I almost got convinced. Except now that I think about it that API is quite foot-gunny since it's not clear what's going on. This is ironically an instance of "more flexible but worse". I think the constructors should've been explicit. Like from_json_encoded(&str) and string_from_unencoded or something similar.

What you seem to be conveying here is that From (and thus also TryFrom) is intended for obvious conversions. And I agree with that; non-obvious conversions shouldn't have those traits implemented at all, to avoid confusion, and should have explicit constructors instead. What I think is that all types that have FromStr have an obvious TryFrom<{stringly}> which works the same way. serde_json is just breaking this, making the API more error-prone.

I don't see anything strawman about it. The question is: should you introduce special-case functionality, even if you can in principle express the same with generic tools? In my opinion, the answer is obviously "yes, you should, if there is some existing or potential difference in user expectations". FromStr is just a single instance of the strong typing philosophy, which is itself supported by all other examples.

A converse philosophy of "don't add any features if the general constructs are sufficient" is implemented in C++, which doesn't even have native tuple or variant types. Few people enjoy it.

Also remember that FromStr is almost never used explicitly. It exists only to power the str::parse method. So the question is "should str::parse rely on generic conversion or explicit nominal typing". The thing is, it's not true that there is a single reason to be convertible from a string, or that it always means parsing. There may be all kinds of reasons, some of them semantically different even if they are syntactically similar.

You can't enforce anything about the implementations of TryFrom, since it's so general. TryFrom<&str> may even be implemented as a special case of some blanket impl, without the user fully understanding the consequences. For example, if the overlapping impl rules didn't forbid it, we could perhaps have conversions

impl<A, B, C> TryFrom<A> for C where B: TryFrom<A>, C: TryFrom<B>

in which case you could have T: TryFrom<&str> just because there is some intermediate conversion type.

Or consider impl From<String> for Vec<u8> and impl From<&str> for Vec<u8>, which just hand back the underlying string's bytes. Would you say that s.parse::<Vec<u8>>() should work and should just return the string's bytes, without actual parsing?
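For reference, those impls exist in std today, and the hypothetical parse does not:

```rust
fn main() {
    let a: Vec<u8> = Vec::from("abc");                // From<&str>: copies the bytes
    let b: Vec<u8> = Vec::from(String::from("abc"));  // From<String>: takes over the buffer
    assert_eq!(a, b);

    // There is deliberately no `impl FromStr for Vec<u8>`, so this doesn't compile:
    // let c: Vec<u8> = "abc".parse().unwrap();
}
```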

FromStr, on the other hand, is a special-purpose trait with no blanket impls and very specific documented semantics. It won't be implemented accidentally, and if it's implemented, you can rely on it to work in a way that you expect. Since it's used basically only for str::parse, anyone implementing it has certainly considered that they're opting into str::parse for their type.

5 Likes
