Control MCO in RCC.CFGR after freeze()

I'm new to Rust, but I've done a fair amount of embedded C development. I've been using a Bluepill board to learn embedded Rust, and one of the things I wanted to do was use the MCO (microcontroller clock output) pin to verify some of the clocks on the board. While I was able to arrive at something that works, my solution seems quite awkward and I wanted to see if there was a better one.

On the STM32, the bits which control the MCO feature are in the CFGR register of the RCC module. As far as I can tell, I must freeze this register to set the clocks. Once it is "frozen" no further modifications to it can be made, even to MCO which has no influence on the clock rate. Is this really the case, or is there a way around this limitation? It seems unfortunate, since it prevents setting up the clocks and testing more than one without re-flashing the part.

The parts of my main.rs dealing with RCC look like this:

let dp = pac::Peripherals::take().unwrap();

dp.RCC.cfgr.modify(|_r, w| w.mco().variant(pac::rcc::cfgr::MCO_A::SYSCLK));

let mut rcc = dp.RCC.constrain();

let _clocks = rcc.cfgr.sysclk(36.mhz()).freeze(&mut flash.acr);

I get the feeling I'm doing this "wrong". At the very least, is there a way to combine the register write to the MCO in the builder that I missed? A way to access the SYSCLK variant without the deep path?

Finally (and hopefully this doesn't completely overload this post), a secondary question: can anyone explain or point me to a description of what the constrain() method is for? I didn't see it anywhere in the embedded book, and the rustdocs on it are pretty opaque to me, but its use seems pretty widespread.

My understanding is that constrain() and freeze() consume the registers to prevent further modification. It is designed this way for safety reasons. I found this blog post helpful to understand the philosophy behind it.
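The ownership mechanics can be sketched in plain Rust on the host. This is a hypothetical sketch, not the real HAL types: it just shows that because freeze() takes the builder by value, no handle remains through which the configuration could be changed afterwards.

```rust
// Hypothetical stand-ins for the HAL's clock types (not real APIs).

struct Cfgr {
    sysclk_hz: u32,
}

struct Clocks {
    sysclk_hz: u32,
}

impl Cfgr {
    fn sysclk(mut self, hz: u32) -> Self {
        self.sysclk_hz = hz;
        self
    }

    // Consumes the builder; only the read-only Clocks summary remains.
    fn freeze(self) -> Clocks {
        Clocks { sysclk_hz: self.sysclk_hz }
    }
}

fn main() {
    let cfgr = Cfgr { sysclk_hz: 8_000_000 };
    let clocks = cfgr.sysclk(36_000_000).freeze();
    // cfgr.sysclk(72_000_000); // would not compile: `cfgr` was moved
    println!("sysclk = {} Hz", clocks.sysclk_hz);
}
```

The commented-out line is the whole point: after freeze(), the borrow checker rejects any further use of the builder at compile time.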

Thanks, that blog post is helpful, but also still confusing to me.

For instance, in the section "Freezing the clock configuration", the article text describes using constrain() to get parts, but then the example uses split() to actually take ownership of rcc from the peripherals. The following section, "Typed Configuration", then uses constrain() to get rcc.

I think some things that would really help me would be:

  1. An explanation of what the distinctions between split, take, and constrain are. When would you use one or the other and why?
  2. A primer on how to navigate the docs. The explanations I find when I try to read up on what constrain() does are too thin to help me understand what's going on.
  3. When I search rcc in the docs I get this. Based on the blog post, I think I understand that there would be two layers here: one to represent the svd2rust-generated stuff, and another to represent the higher-level embedded-hal stuff. However, I see at least four: stm32f1xx::rcc, ::device::rcc, ::pac::rcc, ::stm32::rcc. How do I know which is which, and when and how to use each? The docs seem almost identical for each one.

Finally, I think I do understand and appreciate the philosophy of using typestates to preclude certain error conditions, and how powerful this is in most situations. However, I can think back to several instances where I have needed more flexibility than a straightforward static initialization at startup: having different peripherals connected to the same pins, changing pin modes at different points in the application's life cycle, disabling peripherals for reduced current consumption, or putting out the system clock through the hardware support for this in RCC. Does this model permit that flexibility? For instance, if I never drop() a GPIO, can I reconfigure it as an input/output/peripheral later in the program? Are there ways to use unsafe to modify the parts of rcc.cfgr that control the MCO pin after I've frozen it?

Sorry for this long winded question, obviously I'm a bit lost and a few pointers in the right direction would be a great help!

Edit:
I think one of the missing pieces for me is the svd2rust docs. Reading those (which are largely a re-hash of the blog post) is probably more in line with what I needed to see to understand how to use peripherals.

I believe they are the same. The crate uses the `use` keyword to bind a name to a full path.

I think you can modify the register after calling constrain() if you want to.

let cfgr = unsafe { &(*pac::RCC::ptr()).cfgr };
cfgr.modify(|_, w| { w.mco().variant(pac::rcc::cfgr::MCO_A::SYSCLK) });

However, this uses unsafe. It would probably be better to add a method for modifying MCO to the hal crate. It looks like stm32f1xx-hal exposes a function to modify APB1ENR. Maybe you could do something like that for MCO?

Hey thanks for the replies. I've been working with this some more today and some of it is starting to actually sink in and work, so thanks for your help so far.

Do you have any insight into why the same crate is re-exported with pub use so many times inside the hal? I assume there is a motivating reason, but I'm too new to figure it out from the docs or from reading the source.

The ptr() bypass to the singletons in the svd2rust crate was definitely a missing piece for me. I have no particular plans to use it, but it's important (to me at least) to know that escape hatch is there.

I agree, there is probably a way to write a function in the hal to modify MCO without breaking the frozen clock contract, which would expose the MCO hardware functionality in a safe way while still allowing the clocks to be locked. It's not that I'm not willing to do that, but I'm pretty sure I'm not the right one to do it just yet... Maybe in a few months I'll be able to put a PR together for something like that :slight_smile:.

That said, I think the function you linked is a little different from what the MCO function would be. Correct me if I'm wrong, but AFAICT, the freeze method only locks the CFGR register in RCC. Since APB1ENR wouldn't typically be "frozen" anyway, the function that modifies it is a little simpler since it doesn't have to bypass the "singleton" features of the svd2rust bindings, right?

I am pretty sure the reason for pac is this new recommendation from the rust-embedded group.

HALs reexport their register access crate (C-REEXPORT-PAC)

HALs can be written on top of svd2rust-generated PACs, or on top of other crates that provide raw register access. HALs should always reexport the register access crate they are based on in their crate root.

A PAC should be reexported under the name pac, regardless of the actual name of the crate, as the name of the HAL should already make it clear what PAC is being accessed.

I’m sorry if I confused you. I only wanted to say you could expose a function that modifies the register without messing up the clock configuration.

My understanding is that freeze() doesn’t really “freeze” anything. It just consumes the struct that can access the registers. If we use ptr(), we can always access them and overwrite.

I was imagining adding a function like this to the hal crate.

impl Clocks {
    // ... 
    pub fn set_mco(&mut self, mco: cfgr::MCO_A) {
        let cfgr = unsafe { &(*RCC::ptr()).cfgr };
        cfgr.modify(|_, w| { w.mco().variant(mco) });
    }
}

We can then do this.

let mut clocks = rcc.cfgr.sysclk(16.mhz()).freeze(&mut flash.acr);
clocks.set_mco(pac::rcc::cfgr::MCO_A::SYSCLK);

I am not saying this is the correct way. Although it looks safe enough to me, I could be terribly wrong. I am just a six-month-old Rustacean trying to learn by participating in this forum.

No problem, you didn't confuse me at all; that is my understanding as well. I was simply pointing out that the APB1ENR register example you used is a little different, in that it doesn't need to resort to unsafe because the reference (maybe reference is an imprecise word? instance?) that refers to that register is never dropped. MCO is different because it is a field of bits embedded in the larger CFGR register, which is dropped by freeze. I agree with your solution.

In your time learning about this, have you developed an understanding of what distinguishes constrain from split? They both seem to function very similarly to me (consume a part of the peripherals singleton and return it to be owned by the caller), and the distinction is still a bit confusing.

I've got to admit, as a maintainer of the crate I'm not even 100% sure. I would say in general, that constrain takes a set of registers, and wraps them behind some abstraction to make them safe.

split takes a chunk of registers and divides them into smaller parts. The prime example is the gpio split function which takes gpiox and splits it into structs representing each individual pin.
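That split pattern can be sketched on the host in plain Rust. These are hypothetical stand-ins, not the real stm32f1xx-hal types: one owned register block becomes several owned pin structs, each of which can be handed to a different driver independently.

```rust
// Hypothetical stand-ins for a GPIO register block and its pins.

struct Gpioa; // stands in for the whole GPIOA register block

struct Pa0;
struct Pa1;

impl Pa0 {
    fn name(&self) -> &'static str {
        "PA0"
    }
}

impl Pa1 {
    fn name(&self) -> &'static str {
        "PA1"
    }
}

struct Parts {
    pa0: Pa0,
    pa1: Pa1,
}

impl Gpioa {
    // Consumes the block, so the individual pins become the only handles.
    fn split(self) -> Parts {
        Parts { pa0: Pa0, pa1: Pa1 }
    }
}

fn main() {
    let parts = Gpioa.split();
    let led = parts.pa0;    // one piece of code owns PA0...
    let button = parts.pa1; // ...another owns PA1, with no aliasing
    println!("{} {}", led.name(), button.name());
}
```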

As far as I know, take is only present on the pac and gives a uniquely owned object containing all the registers. If take is called a second time, it returns None, since the whole abstraction relies on there being only one copy of each register around.
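The take() mechanism can be sketched on the host like this. This is a hypothetical sketch, not the svd2rust source: the real generated code uses a flag plus a critical section along broadly similar lines, with a host atomic standing in here.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Records whether the peripherals singleton has been handed out.
static TAKEN: AtomicBool = AtomicBool::new(false);

struct Peripherals; // stands in for all the register blocks

impl Peripherals {
    fn take() -> Option<Peripherals> {
        // swap returns the previous value, so only the first caller
        // sees `false` and receives the singleton
        if TAKEN.swap(true, Ordering::SeqCst) {
            None
        } else {
            Some(Peripherals)
        }
    }
}

fn main() {
    let first = Peripherals::take();
    let second = Peripherals::take();
    println!("first: {}, second: {}", first.is_some(), second.is_none());
}
```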

Yes, the docs are pretty lacking in a lot of places. I'm aware of some, and not others, so I would really appreciate a heads up whenever the docs are lacking. For example, in this issue: https://github.com/stm32-rs/stm32f1xx-hal/issues/158 (which I just realised has 0 out of 10 items completed :sweat_smile:)

When I search rcc in the docs I get this. Based on the blog post, I think I understand that there would be two layers here: one to represent the svd2rust-generated stuff, and another to represent the higher-level embedded-hal stuff. However, I see at least four: stm32f1xx::rcc, ::device::rcc, ::pac::rcc, ::stm32::rcc. How do I know which is which, and when and how to use each? The docs seem almost identical for each one.

Oh yeah, that's annoying. Ideally, stm32f1xx_hal::rcc::RCC is the one that should be shown. The four others are various names for the pac crate. These are caused by this issue https://github.com/stm32-rs/stm32f1xx-hal/issues/16 in which we decided that rather than picking one of pac, stm32, and device, we'd export it under all three names. I can see this causing confusion, and I think we should deprecate stm32 and device.

However, I can also think back to several instances where I have needed more flexibility than a straightforward static initialization at startup. For instance, having different peripherals connected to the same pins, changing pin modes at different points in the application's life cycle, disabling peripherals for reduced current consumption, or putting out the system clock through the hardware support for this in RCC.

This is indeed a limitation. I guess we have to sacrifice some generality to get the safety that Rust achieves, but Rust can also be pretty good at giving you both safety and other things. Turning peripherals off can certainly be achieved; the ADC, for example, has a power_down function (or at least I thought it did; maybe that was in my fork and has never been upstreamed).

As for GPIO, it's a known issue; see, for example, the most angry issue I've ever received. A partial solution is in the works: https://github.com/stm32-rs/stm32f1xx-hal/pull/211

I agree, there is probably a way to write a function in the hal to modify MCO without breaking the frozen clock contract, which would expose the MCO hardware functionality in a safe way while still allowing the clocks to be locked. It's not that I'm not willing to do that, but I'm pretty sure I'm not the right one to do it just yet... Maybe in a few months I'll be able to put a PR together for something like that :slight_smile:.

I don't recall what the MCO hardware does, but the guarantee given by freeze is just that the clocks won't change. I assume MCO doesn't violate that, so it should probably be exposed. If you would like to implement that, I'll gladly give you some pointers for where to start :slight_smile:

Hope that lengthy message answers some of your questions :slight_smile:


First of all, I want to say thanks for the reply and for the work on the Rust embedded ecosystem.

This is probably more of a help than you think. At least it confirms I'm not the only one, and that there isn't some deeper meaning I'm completely glossing over. I'm obviously much too new to this ecosystem to be slinging API recommendations around, so I'll stop short of that and just say this probably indicates a point of improvement.

Thanks for the clarification on the multiple imports. It's not ideal, but knowing the history helps me know what to "ignore" in the docs as redundant. +1 for deprecating any that have proved to be redundant, I guess :slightly_smiling_face:.

I can understand some loss of flexibility in exchange for better safety guarantees, but I don't think re-configuration of peripherals is fundamentally unsafe. To me, one of the most inspired ideas in embedded-hal and svd2rust is that the rules for safe use of memory are similar to the rules for safe use of peripherals, so the ownership system is used for both. A place where this seems to break down for me (and again, this could easily be down to my lack of understanding) is that with memory, drop doesn't really mean destroy; it means "return to the system". It seems, therefore, that the embedded-hal should have an analogous way to de-init a peripheral and get the base peripheral back. In fact, as you point out, such a thing exists for the ADC and a few other peripherals.

If release were implemented for GPIO (and probably universally), wouldn't that allow users to release an output pin and reconfigure it as an input pin (and vice versa) as needed, with the same borrow-checker-enabled safety? Again, I'm totally aware that I'm too new at this to see the practical issues, but as a newbie it seems like this could be done safely using the ownership system.

On the slightly more extreme end, I wonder if it could also work on a frozen clock config, at the cost of some runtime reference counting. Safely changing the clock would then mean: release all peripherals that need clocks so they drop their reference to it, then call some method to get Rcc back from Clocks, and re-initialize everything at the new clock rate.
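The runtime-reference-counting idea can be sketched on the host with std's Rc. This is a hypothetical sketch (no real HAL types): each peripheral holds an Rc of the clock config, and the config can only be taken back and changed once every peripheral holding a reference has been released.

```rust
use std::rc::Rc;

// Hypothetical stand-ins for the clock config and a peripheral.

struct Clocks {
    sysclk_hz: u32,
}

struct Serial {
    _clocks: Rc<Clocks>, // the peripheral pins down the clock config
}

fn main() {
    let clocks = Rc::new(Clocks { sysclk_hz: 36_000_000 });
    let serial = Serial { _clocks: Rc::clone(&clocks) };

    // While `serial` lives, try_unwrap fails and hands the Rc back:
    let clocks = match Rc::try_unwrap(clocks) {
        Ok(_) => unreachable!("serial still holds a reference"),
        Err(still_shared) => still_shared,
    };

    drop(serial); // release the peripheral first

    // Now the count is 1, so the config can be reclaimed and changed:
    let mut inner = Rc::try_unwrap(clocks).ok().expect("last handle");
    inner.sysclk_hz = 72_000_000;
    println!("new sysclk = {} Hz", inner.sysclk_hz);
}
```

On a real single-core MCU something like this would presumably use a non-atomic count or a critical section rather than std's Rc, but the ownership story is the same.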

Other than feeling slightly insecure that some part of your brain had associated my questions with an issue where the code of conduct had to be linked (I hope I'm not coming across like that!), reading through that issue was actually very helpful. In particular, it highlighted why generics are so broadly used, which is something that, as a C programmer, I could certainly spend some time brushing up on.

The MCO is the "microcontroller clock output": it routes a selectable internal clock (SYSCLK, LSE, HSE, etc.) out to the MCO pin (PA8 on the STM32F1). When making your own board, I find this feature very helpful for confirming that clocks are working correctly, which is important because these are circuits that can be highly layout-sensitive. The high impedance of the crystal oscillator circuits means they cannot be directly probed with an oscilloscope, at least not without very specialized low-capacitance probes (even many active probes present too much loading). Verifying crystal operation through the MCO is better and more direct anyway, since it confirms that all the pre-scaling and so on is correct. We can totally do this and hold the freeze guarantee, and I am interested in implementing it, but I don't feel comfortable doing so yet. Give me a week or two to feel my way around a bit more, and I'll absolutely take you up on your offer of some pointers on how to do it right :slightly_smiling_face:!

That's certainly not the way I meant it; it's just that the actual issues hidden behind all the angry words in that thread are similar to some of the ones you're describing. (I also kind of love sharing that issue because of how ridiculous it is :P)

For sure; in general, most peripherals should allow reconfiguration. Most of them implement a release function which turns them off and returns the original registers.

Yep, and not having a release already is kind of an oversight (although we have all the into_ functions that do the same thing). The problem is that as a user, especially one writing a driver, there is no general way to release and reconfigure a pin without knowledge of the specific hal implementation.
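Those into_ functions are a typestate pattern that can be sketched on the host. This is a hypothetical sketch, not the real stm32f1xx-hal pin types: changing a pin's mode consumes the old handle and returns a new one, so a stale handle in the old mode simply cannot exist.

```rust
use std::marker::PhantomData;

// Hypothetical pin-mode typestates.
struct Input;
struct Output;

struct Pin<MODE> {
    level: bool, // simulated output latch
    _mode: PhantomData<MODE>,
}

impl<MODE> Pin<MODE> {
    // Consuming `self` means the handle in the old mode is gone.
    fn into_input(self) -> Pin<Input> {
        Pin { level: self.level, _mode: PhantomData }
    }
    fn into_output(self) -> Pin<Output> {
        Pin { level: self.level, _mode: PhantomData }
    }
}

impl Pin<Output> {
    fn set_high(&mut self) {
        self.level = true;
    }
}

impl Pin<Input> {
    fn is_high(&self) -> bool {
        self.level
    }
}

fn main() {
    let mut pin = Pin::<Output> { level: false, _mode: PhantomData };
    pin.set_high();
    let pin = pin.into_input(); // reconfigure: old handle is consumed
    // pin.set_high(); // would not compile: Pin<Input> has no set_high
    println!("reads high: {}", pin.is_high());
}
```

So reconfiguration is possible; the catch the maintainer describes is that each HAL spells these conversions differently, so generic driver code can't rely on them.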

That might actually work. I don't think the reference counting would even be a performance issue, since you very seldom release or acquire peripherals.

Ah right, that shouldn't cause any issues with the clocks as you said :slight_smile:

Hey, thanks for all the input.

Is there an alternate name that some peripherals use for this feature? It seems like only ADC, Serial, QEI and Timer actually implement it.

I see. Do you think release() should be part of the higher level embedded-hal interface crate?

I still don't think I'm ready to write pull requests just yet, but how would you feel if I opened some GitHub issues to start tracking some of these things, and maybe collect more input? In particular I'm thinking of issues for:

  • safe MCO configuration after freeze
  • implement release for other modules (traits? still figuring out the right names for things :slightly_smiling_face:)
  • add reference counting to clocks so that they can be safely "unfrozen" and modified
  • disable_jtag seems too coarse. In particular, SWO on PB3 is still usable with SWD debugging, but disable_jtag frees SWO along with the JTAG-only pins.

Please do :slight_smile: Suggestions for improvements are always welcome!

Off the top of my head, I don't think it has another common name, so it's probably missing. It should be fairly easy to add, though.

I think this would be difficult. Release itself would be fine, but how would the opposite work? Each hal has a different API for initialisation and takes a different set of inputs, so writing generic initialisation code would be very difficult.

If we go down the embedded-hal path, then it would be a trait; otherwise, just a normal function. A trait is basically an interface with some extra niceties, in traditional OOP terms.

I use SWD with disable_jtag all the time (I think, or maybe I'm getting the different debugging methods mixed up). Either way, open an issue so we can look into it more closely and won't forget about the problem :slight_smile:

Hmm, I see. In C, I would probably opt to take setup parameters that are functions for the basic operations I need, things like set_data_pin_input and set_data_pin_output, so the user can provide a minimal amount of behavior that adapts the driver to the hardware at hand. I'm not sure whether that fits with the embedded-hal approach.
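That C pattern translates fairly directly to Rust closures. This is a hypothetical sketch (BitBangDriver and both closure names are made up for illustration): the driver is generic over two user-supplied operations, so it needs no knowledge of the concrete HAL.

```rust
use std::cell::RefCell;

// Hypothetical driver that stores the user's pin-mode operations.
struct BitBangDriver<FIn, FOut>
where
    FIn: FnMut(),
    FOut: FnMut(),
{
    set_data_pin_input: FIn,
    set_data_pin_output: FOut,
}

impl<FIn, FOut> BitBangDriver<FIn, FOut>
where
    FIn: FnMut(),
    FOut: FnMut(),
{
    fn read_cycle(&mut self) {
        (self.set_data_pin_output)(); // drive the request out
        (self.set_data_pin_input)();  // then turn around and listen
    }
}

fn main() {
    // In firmware these closures would poke the GPIO mode registers;
    // here they just record the calls so the sequence is observable.
    let log = RefCell::new(Vec::new());
    let mut driver = BitBangDriver {
        set_data_pin_input: || log.borrow_mut().push("input"),
        set_data_pin_output: || log.borrow_mut().push("output"),
    };
    driver.read_cycle();
    println!("{:?}", log.borrow());
}
```

The closures capture whatever pin handles the user owns, so the borrow checker still tracks who may touch the pin, which is roughly the ownership question raised in the reply below.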

Yes, absolutely, SWO is extra tracing functionality from the Cortex-M, and is not necessary for basic SWD. However, in lots of circumstances it is a nice feature to use alongside SWD to get extra tracing data out. I'll open an issue :slightly_smiling_face:

Right, something like that might work, though I'm not 100% sure how well that would work with ownership etc.
