Can't Compile to nvptx

I wanted to implement dynamic allocation and release of memory in CUDA using core::arch::nvptx in Rust. I made the following directory structure:

├── Cargo.lock
├── Cargo.toml
├── nvptx
│   ├── .cargo
│   │   └── config
│   ├── Cargo.lock
│   ├── Cargo.toml
│   └── src
│       └──
└── src

I created a .cargo/config file in the nvptx crate, with the following contents:

target = "nvptx64-nvidia-cuda"
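As an aside, Cargo normally expects the target override in a config file to sit under the [build] table; a minimal sketch of what that file could look like (the comment path is inferred from the directory tree above, not quoted from the thread):

```toml
# nvptx/.cargo/config
[build]
target = "nvptx64-nvidia-cuda"
```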

And here is what the code in nvptx/src/ looks like:

#![feature(stdsimd)]
#![no_std]

use core::arch::nvptx;
use core::ffi::c_void;

// Thin wrappers around the device-side malloc/free intrinsics.
pub unsafe fn cuda_alloc(size: usize) -> *mut c_void {
    nvptx::malloc(size)
}

pub unsafe fn cuda_free(ptr: *mut c_void) {
    nvptx::free(ptr)
}

Also, the code in src/ is as follows.

use std::time::Duration;
use std::thread::sleep;

use nvptx::*;

fn foo() {
    unsafe {
        let _ptr = cuda_alloc(10);
    }
}

fn main() {
    println!("Hello, world!");
}

When I compiled this, I received the following error.

error[E0432]: unresolved import `core::arch::nvptx`
 --> nvptx/src/
  |
4 | use core::arch::nvptx;
  |     ^^^^^^^^^^^^^^^^^ no `nvptx` in `arch`

error: aborting due to previous error

For more information about this error, try `rustc --explain E0432`.
error: could not compile `nvptx`.

To learn more, run the command again with --verbose.

Rust version (output of rustup show):

Default host: x86_64-unknown-linux-gnu

installed toolchains
--------------------

nightly-x86_64-unknown-linux-gnu (default)

installed targets for active toolchain
--------------------------------------


active toolchain
----------------

nightly-x86_64-unknown-linux-gnu (default)
rustc 1.54.0-nightly (f94942d84 2021-05-19)

The .cargo directory will only be picked up if you run Cargo from inside the nvptx directory. Try moving it to the top level.
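That also explains the E0432 itself: core::arch::nvptx only exists when the compilation target is nvptx64, so a crate built for the host cannot reference it. A minimal sketch of gating code on the target architecture (the function name here is hypothetical, not from the thread):

```rust
// GPU build: only this path may touch core::arch::nvptx intrinsics.
#[cfg(target_arch = "nvptx64")]
fn where_am_i() -> &'static str {
    "nvptx64 device"
}

// Host build: fallback so the crate still compiles on x86_64.
#[cfg(not(target_arch = "nvptx64"))]
fn where_am_i() -> &'static str {
    "host"
}

fn main() {
    println!("running on: {}", where_am_i());
}
```

Compiled for the host, the cfg(not(...)) branch is the one that exists, so the GPU-only code never has to resolve there.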

Thank you for your answer.
Putting .cargo at the top level is a problem when compiling Rust for nvptx, because then the whole project is built for the GPU target and you cannot use std. That is why I split the code that compiles to nvptx into its own crate.

Also, a .cargo/config like this is present in accel-core, part of the crate called accel; I copied it from there.
