 # Handling small fixed Rc cycles (size 2)

I know that handling Rc cycles is horrible in general, but I have a small special case I hoped might be solvable: my cycles are always size 2, and each object can only be in one cycle.

I have a mathematical object with a method `inverse()` which for each value X returns the inverse of X. This is fairly expensive, so I put the result in an Rc, and I both return it and keep a clone inside X in case it is wanted again in the future.

However, the inverse of the inverse of X is X, so I'd like to tell the new inverse object that its inverse is X (which will require X always being in an Rc, but I'm fine with that).

Therefore, during drop I could check if (a) there is only one strong reference left to X and (b) I have a stored inverse and (c) the stored inverse also only has one reference. If all these things are true, I have a tiny cycle and I should break the cycle so the two Rcs will free themselves.

My worry is that if in future I upgrade to Arcs and start doing multithreading, this kind of counting could break horribly. I don't suppose this kind of small fixed cycle has already been handled well by someone else?

Your description of the problem is horribly confusing, but I think I was contemplating a similar issue recently.

We have some object/value that we can perform some function on to get some other object/value:

```
x = something
y = some_function(x)
```

We have an inverse function to get back to where we started:

```
z = some_function_inverse(y)
```

At which point we hope to have z == x.

Now, essentially x and y are the same thing. Just different representations of it. If you have one you can get the other and vice versa.

So why, I thought, not define a struct that can hold that thing in both representations?

You can then create the struct with one method, giving the value as a parameter. Or you can create the same type of struct with another method, giving the inverse value as a parameter.

Then you have two accessor functions that can get the value or the inverse out.

The question then is, when does the calculation happen?

1. Calculate the inverse on demand when it is requested.
2. Calculate the inverse when the value is set initially and cache it in the struct.
3. Calculate the value on demand when it is requested.
4. Calculate the value when the inverse is set.

Eventually the struct could hold both the value and the inverse at the same time. No need to calculate anymore.

Sorry, I think I am partly unsure exactly what I want. This is a different way of solving the problem: putting them both in the same struct instead of having two structs connected in a cycle.

Perhaps some sort of enum to hold both states might be worthwhile, like:

```rust
use std::rc::Rc;

enum Invertible {
    Base(Rc<Item>),
    Inverted { current: Rc<Item>, inverse: Rc<Item> },
}

impl Invertible {
    fn value(&self) -> &Item {
        match self {
            Invertible::Base(item) => item.as_ref(),
            Invertible::Inverted { current, .. } => current.as_ref(),
        }
    }

    fn invert(&self) -> Self {
        match self {
            Invertible::Base(item) => Invertible::Inverted {
                current: Rc::new(expensive_op(item)),
                inverse: Rc::clone(item),
            },
            Invertible::Inverted { current, inverse } => Invertible::Inverted {
                current: Rc::clone(inverse),
                inverse: Rc::clone(current),
            },
        }
    }
}
```

(Just winging it, not sure if this compiles as-is).

Neither value will ever be dropped while this reference cycle exists, so checking in `drop` won't work. (`drop` will run only after the cycle is broken.)

You can also just use `rc::Weak` to keep your values, and upgrade them only briefly when you need to return the inverted value.


If your object is `Clone`, why not a hashmap from any given item to its inverse? You could insert both directions of the mapping at once.

If you bring threading into the mix, you could try locks on the hashmap or just have a separate hashmap per thread, which would at most perform any given inversion n/2 times, where n is the number of threads and it’s halved because computing one inversion gives you that inversion’s inversion for free. If you can somehow optimize the workload so no elements of any pair ever appear on more than one thread, then you’d have just some minor allocation overhead for the hashmap per thread.

Whether multithreading or not, you could store the hashmap in a separate place and pass it into the inversion function along with the invertible object. This is in my opinion cleaner than having the objects aware of their own inverses, but then again I don’t fully know your use case. Alternatively, you could bring `Rc` or `Arc` back into the mix by having the invertible objects themselves store references to the hashmap and have a similar API to what you are after now. That might get weird with lifetimes but it won’t require any cycles if you use `Option` fields and do the cloning dance well enough. I leave that to you to work out.

I think this is the closest to my problem. I'm going to go and think a bit. Thanks!

Try this:

```rust
use once_cell::sync::OnceCell;
use std::sync::Arc;

struct InvertibleValue {
    shared: Arc<Shared>,
    inverse: bool,
}

struct Shared {
    val: i32,
    inv: OnceCell<i32>,
}

impl Shared {
    fn get_inv(&self) -> &i32 {
        self.inv.get_or_init(|| -self.val)
    }
}

impl InvertibleValue {
    fn get(&self) -> &i32 {
        if self.inverse {
            self.shared.get_inv()
        } else {
            &self.shared.val
        }
    }

    fn invert(&self) -> Self {
        Self {
            shared: self.shared.clone(),
            inverse: !self.inverse,
        }
    }
}
```

Food for thought, perhaps:

```rust
use std::rc::Rc;

trait Inverse {
    fn inverse(&self) -> Self;
}

#[derive(Debug)]
struct Invertible<T> {
    value: Rc<T>,
    inverted: Option<Rc<T>>,
}

impl<T: Inverse> Inverse for Invertible<T> {
    fn inverse(&self) -> Self {
        Invertible {
            value: match &self.inverted {
                Some(inv) => Rc::clone(inv),
                None => Rc::new(self.value.inverse()),
            },
            inverted: Some(Rc::clone(&self.value)),
        }
    }
}

impl<T> From<T> for Invertible<T> {
    fn from(value: T) -> Self {
        Invertible { value: Rc::new(value), inverted: None }
    }
}

impl Inverse for f64 {
    fn inverse(&self) -> Self {
        1.0 / self
    }
}

fn main() {
    let mut x = Invertible::from(10.0);
    println!("{:?}", x);
    x = x.inverse();
    println!("{:?}", x);
    x = x.inverse();
    println!("{:?}", x);
}
```
```
Invertible { value: 10.0, inverted: None }
Invertible { value: 0.1, inverted: Some(10.0) }
Invertible { value: 10.0, inverted: Some(0.1) }
```

There's no type level distinction between the "normal" object and the "inverted" object because they are represented the same.

In this implementation, if you call `inverse()` on the same `Invertible` twice, you get two different `Invertible`s that refer to the same `inverted` but have different `value`s. To fix this you need interior mutability, or to make `inverse` take `&mut self`. There are other approaches as well. My only motivation for posting this is I thought it was neat.

Inspired by @alice's post, I rewrote mine to use `OnceCell` instead of `Option`:

```rust
use once_cell::unsync::OnceCell;
use std::rc::Rc;

trait Inverse {
    fn inverse(&self) -> Self;
}

#[derive(Debug)]
struct Invertible<T> {
    value: Rc<T>,
    inverted: OnceCell<Rc<T>>,
}

impl<T> Invertible<T> {
    fn get(&self) -> &T {
        &*self.value
    }
}

impl<T: Inverse> Inverse for Invertible<T> {
    fn inverse(&self) -> Self {
        Invertible {
            value: Rc::clone(self.inverted.get_or_init(|| Rc::new(self.value.inverse()))),
            inverted: OnceCell::from(Rc::clone(&self.value)),
        }
    }
}

impl<T> From<T> for Invertible<T> {
    fn from(value: T) -> Self {
        Invertible {
            value: Rc::new(value),
            inverted: OnceCell::new(),
        }
    }
}

impl Inverse for f64 {
    fn inverse(&self) -> Self {
        1.0 / self
    }
}

fn main() {
    let mut x = Invertible::from(10.0);
    println!("x   {0:p} {0}", x.get());
    x = x.inverse();
    println!("x'  {0:p} {0}", x.get());
    x = x.inverse();
    println!("x'' {0:p} {0}", x.get());
}
```

The advantage I think this has is that `Invertible::get()` doesn't have to check whether `self` is currently holding an inverse or not. The presence or absence of a calculated inverse is only relevant when you actually want to invert the value. I'm not sure if there are any disadvantages.
