How do we describe a type that can be constructed from number literals, like f32, i32, usize, etc.? Is there a correct way of doing this?
Concretely, is there a way of filling in the question mark below?
fn f<T>() -> T where T: ? {
    0
}
There isn't one.
You can use Default, which gives you zero for at least the built-in numeric types.
You can use Zero from the num crate, which is defined as constructing the additive identity.
You can use TryFrom<T> to construct numeric values from some other type. For example, you could use TryFrom<u8> to construct values in 0..256, or TryFrom<i8> to construct from small negative values.
Or if you need some other specific value, you can define and implement your own trait.
But constructing values from arbitrary number literals? No can do.
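For illustration, here is a minimal sketch of some of those alternatives. The names zero_via_default, from_small, and Two are made up for this example; they are not from the original answer.

// Default gives zero for the built-in numeric types.
fn zero_via_default<T: Default>() -> T {
    T::default()
}

// TryFrom<u8> constructs values in 0..256 for types that implement it.
fn from_small<T: TryFrom<u8>>(n: u8) -> T
where
    <T as TryFrom<u8>>::Error: std::fmt::Debug,
{
    T::try_from(n).expect("value does not fit in the target type")
}

// A custom trait for any other specific value you need.
trait Two {
    fn two() -> Self;
}

impl Two for i32 {
    fn two() -> Self {
        2
    }
}

fn main() {
    let a: i32 = zero_via_default();
    let b: u16 = from_small(200);
    let c: i32 = Two::two();
    println!("{a} {b} {c}");
}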
An attempt at this problem:
#![feature(const_trait_impl)]
#![feature(inline_const)]
#[const_trait]
pub trait FromLiteral {
    fn from_literal(literal: &'static str) -> Self;
}

macro_rules! literal {
    // Stringifies the literal and parses it inside a const block, so the
    // conversion happens at compile time.
    ($l:literal) => {{
        const fn const_wrapper<T>() -> T
        where
            T: ~const FromLiteral,
        {
            const { FromLiteral::from_literal(stringify!($l)) }
        }
        const_wrapper()
    }};
}

fn f<T>() -> T where T: FromLiteral {
    literal!(0)
}
Some user code using these items:
impl const FromLiteral for usize {
    fn from_literal(literal: &str) -> Self {
        let bytes = literal.as_bytes();
        let length = bytes.len();
        let mut result = 0;
        let mut index = 0usize;
        loop {
            if index < length {
                match bytes[index] {
                    byte @ b'0'..=b'9' => {
                        result *= 10;
                        result += (byte - b'0') as Self;
                    }
                    _ => panic!("invalid digit character"),
                };
                index += 1usize
            } else {
                break result;
            }
        }
    }
}
impl const FromLiteral for i32 {
    fn from_literal(literal: &'static str) -> Self {
        // logically wrong, just test code
        let bytes = literal.as_bytes();
        let length = bytes.len();
        let mut result = 0;
        let mut index = 0usize;
        loop {
            if index < length {
                match bytes[index] {
                    byte @ b'0'..=b'9' => {
                        result *= 10;
                        result += (byte - b'0') as Self;
                    }
                    _ => panic!("invalid digit character"),
                };
                index += 1usize
            } else {
                break result;
            }
        }
    }
}
pub fn is_prime<NaturalNumber>(n: &NaturalNumber) -> bool
where
    NaturalNumber: FromLiteral
        + std::ops::Add<Output = NaturalNumber>
        + std::ops::Mul<Output = NaturalNumber>
        + std::ops::Rem<Output = NaturalNumber>
        + Ord
        + Copy,
{
    let mut factor: NaturalNumber = literal!(2);
    loop {
        if *n % factor == literal!(0) {
            break *n == factor;
        } else if factor * factor > *n {
            break true;
        } else {
            factor = factor + literal!(1)
        }
    }
}
pub fn smallest_prime_larger_than<NaturalNumber>(n: &NaturalNumber) -> NaturalNumber
where
    NaturalNumber: FromLiteral
        + std::ops::Add<Output = NaturalNumber>
        + std::ops::Mul<Output = NaturalNumber>
        + std::ops::Rem<Output = NaturalNumber>
        + Ord
        + Copy,
{
    let mut i = *n + literal!(1);
    loop {
        match is_prime(&i) {
            true => break i,
            false => i = i + literal!(1),
        }
    }
}
fn main() {
    (2..=30usize).for_each(|index| println!("{index} {}", smallest_prime_larger_than(&index)));
    (2..=30i32).for_each(|index| println!("{index} {}", smallest_prime_larger_than(&index)));
}
Some things to consider:
This blows up the expanded code linearly with the number of uses of the literal! macro, and it will certainly increase compile time.
I assume it does no damage to runtime performance, since all literals are const expressions.
So far this requires rewriting the literal-parsing logic by hand. I am not sure whether it could be improved by turning it into a procedural macro, or whether there is a compiler API we could use to parse literals into built-in types in an official way. The main problem I see is that const really restricts the set of tools at hand, which makes crates like rustc_lexer unavailable.
I'm still not sold on the approach. The biggest problem I can think of is that this is all well and good for getting 0, but what about having fn g<T: FromLiteral>() -> T { literal!(256) }?
That will type check, but fail to actually compile. That could be really frustrating if we're talking about a data structure that takes a T, and type checks, but blows up when you try to use some method that requires a literal that's out of bounds for T. You don't get a type check error, you get a compile-time panic.
It makes me think of C++ template errors, and I don't think anyone wants Rust to head in that direction.
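As a sketch of that failure mode, assuming the FromLiteral trait and literal! macro from above plus a u8 implementation analogous to the usize one (the problematic call is left commented out):

impl const FromLiteral for u8 {
    fn from_literal(literal: &'static str) -> Self {
        let bytes = literal.as_bytes();
        let mut result: u8 = 0;
        let mut index = 0usize;
        while index < bytes.len() {
            match bytes[index] {
                byte @ b'0'..=b'9' => {
                    // For literal!(256) these operations overflow a u8
                    // during const evaluation.
                    result *= 10;
                    result += byte - b'0';
                }
                _ => panic!("invalid digit character"),
            }
            index += 1;
        }
        result
    }
}

fn g<T: FromLiteral>() -> T {
    literal!(256)
}

fn demo() {
    // This type checks, since u8 implements FromLiteral, but uncommenting
    // the line below makes const evaluation of literal!(256) overflow u8,
    // so the build fails with a const-eval error rather than a type error.
    // let _x: u8 = g();
}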
Personally, if I need anything other than zero, I'll usually just go with either u8: Into<T> or a custom trait.
But maybe I'm just being picky.
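As a rough sketch of the u8: Into<T> style (add_one is a made-up example function): small constants are written as u8 literals and converted losslessly, and anything that doesn't fit in u8 simply can't be written this way, so mistakes stay at the type level.

fn add_one<T>(n: T) -> T
where
    T: std::ops::Add<Output = T>,
    u8: Into<T>,
{
    // 1 is written as a u8 literal and converted into T via the bound.
    n + 1u8.into()
}

fn main() {
    let a: i32 = add_one(41);
    let b: f64 = add_one(2.5);
    println!("{a} {b}");
}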
Unrelated note: this is giving me flashbacks to CTFE programming in D. And not the fun kind of flashbacks... and I say that as someone who implemented printf and an INI parser in CTFE.
That will type check, but fail to actually compile.
That's new to me. I had always assumed a rough equivalence between compiling and type checking plus optimizing.
Indeed, I find my approach's error messages, and the fact that there is no squiggly underline warning me when I pass in an invalid literal, to be frustrating. The route I considered was a procedural macro, where I know I could provide a better error message and tell the IDE to draw the red line before I actually compile.
The difficulty there is that type information is not available at macro expansion time, even though the type information is what should decide which parsing logic to use. Unfortunately, to get errors at type-check time, it seems the parsing logic can only run during macro expansion. I have concluded that an as-beautiful-as-we-would-like version of FromLiteral is not feasible in current Rust.
Why are you so sure? I, for one, would love the ability to have easy-to-use template metaprogramming (TMP).
All modern languages offer that ability, even Rust; it's just that in Rust you have to use macros for it.
I haven't done CTFE in D, but I've done a lot of things with templates in C++17, and they are clearly easier to use than Rust's generics.
It may not be the best choice for a public API, but for something where all instantiations are made by the same person who is writing the templates, it's much better than what Rust offers: such code is much easier to write, and the error messages, while annoying, are no more cryptic than what you get from problems in macro expansion.
A public API where such things can be instantiated with an arbitrary type, though… in that case investing in proper constraints is a good idea. But when the number of types you want to support is limited… my flashbacks of using C++17 for that are mostly good.
For 0 specifically, there's a secret incantation for the additive identity in the standard library: std::iter::empty::<T>().sum::<T>(), depending on T: Sum.
(But use num_traits::Zero. It's way clearer.)
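A quick sketch of both spellings (num-traits assumed as a dependency for the second):

// Summing an empty iterator yields the additive identity via T: Sum.
fn zero_via_sum<T: std::iter::Sum>() -> T {
    std::iter::empty::<T>().sum::<T>()
}

// The clearer spelling, using num_traits::Zero.
fn zero_via_zero<T: num_traits::Zero>() -> T {
    T::zero()
}

fn main() {
    let a: f32 = zero_via_sum();
    let b: i64 = zero_via_zero();
    println!("{a} {b}");
}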