How to synchronize access on a raw pointer used in multiple threads?

I'm trying to implement ODBC functions. One of them has this signature in C:

SQLRETURN SQLAllocHandle(
      SQLSMALLINT   HandleType,
      SQLHANDLE     InputHandle,
      SQLHANDLE *   OutputHandlePtr);

This will be called from outside of Rust.

This basically creates a handle of the specified type and returns a pointer to it in OutputHandlePtr, at which point my Rust code "forgets" it (an unsafe operation), so it now lives in C land.
At some later point my code might be called again with that handle as the InputHandle.

According to the specification I now need to make sure to synchronize access to this handle.

This is how it is implemented in the PostgreSQL ODBC driver:

#define ENTER_ENV_CS(x)		pthread_mutex_lock(&((x)->cs))
#define LEAVE_ENV_CS(x)		pthread_mutex_unlock(&((x)->cs))
			ENTER_ENV_CS((EnvironmentClass *) InputHandle);
			ret = PGAPI_AllocConnect(InputHandle, OutputHandle);
			LEAVE_ENV_CS((EnvironmentClass *) InputHandle);

This is my very first time using unsafe and I'm not a Rust expert to begin with.
I'm wondering how I can replicate the ENTER_ENV_CS (CS = CriticalSection) / LEAVE_ENV_CS logic.

In Rust my signature looks like this:

pub fn SQLAllocHandle(
    handle_type: HandleType,
    input_handle: *mut c_void,
    output_handle: *mut *mut c_void,
) -> SqlReturn {

I can easily wrap input_handle in a Mutex, but this function can be called in parallel from multiple threads, and each call would end up creating its own Mutex.

My only idea so far is to have the synchronization object be part of the Handle I return.

struct EnvironmentClass<T> {
    inner: Mutex<T>,
}
But I wonder if there is a more direct translation of the C code.

Were I in your shoes, I'd simply make EnvironmentClass !Send and !Sync. That would push responsibility for the Mutex to the user of your code. In my mind, that's where it belongs.

This is about implementing a function that is specified as requiring Send and Sync, though:

On operating systems that support multiple threads, applications can use the same environment, connection, statement, or descriptor handle on different threads. Drivers must therefore support safe, multithread access to this information; one way to achieve this, for example, is by using a critical section or a semaphore.

In this case, wrapping the internals of each handle in a Mutex seems to be the correct way to go.

You won't be able to do the same kind of tricks the PostgreSQL ODBC driver does in C, though, where the connection structure begins with an environment structure, the layout is predictable, and the environment mutex can be used for thread-safe access to the connection fields. In this case I don't really see a use for the <T>, and I would expect something more like

struct EnvironmentClass {
    inner: Mutex<EnvironmentClassImpl>,
}
defined separately for each of the handle types.
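To make the parallel with ENTER_ENV_CS / LEAVE_ENV_CS concrete, here is a minimal sketch of how a handle carrying its own Mutex could be locked on entry. The names EnvironmentClassImpl, its connection_count field, and the with_env helper are hypothetical; in Rust the "leave" half happens automatically when the guard goes out of scope:

```rust
use std::ffi::c_void;
use std::sync::Mutex;

// Hypothetical inner state; the fields are placeholders.
struct EnvironmentClassImpl {
    connection_count: u32,
}

struct EnvironmentClass {
    inner: Mutex<EnvironmentClassImpl>,
}

// Sketch of the locking pattern, mirroring ENTER_ENV_CS / LEAVE_ENV_CS:
// reconstruct a reference from the raw handle, lock, do the work.
// Safety: `handle` must point to a live EnvironmentClass.
unsafe fn with_env<R>(
    handle: *mut c_void,
    f: impl FnOnce(&mut EnvironmentClassImpl) -> R,
) -> R {
    let env = &*(handle as *const EnvironmentClass);
    let mut guard = env.inner.lock().unwrap(); // ENTER_ENV_CS
    f(&mut *guard)
    // guard is dropped here, releasing the lock => LEAVE_ENV_CS
}
```

Unlike the C macros, there is no way to forget the unlock: the MutexGuard releases the lock on every exit path, including panics.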


"Information" not "function". Protecting the information is as simple as Mutex<EnvironmentClass>. Which is then Send and Sync.

The Mutex is only necessary if the handle is accessed by multiple threads. If the handle is accessed by a single thread the Mutex is just overhead. Moving the Mutex to the outside gives the user the choice.

Thank you both!

"The user" doesn't have a choice as this is a ODBC driver which is being called by all kinds of applications via the Windows driver manager and I can't change those. I have to assume that multiple threads access the handle at all times.

But I guess I could just return a Mutex<EnvironmentClass> as my handle to begin with?

The reason for that is part of another question (which I can ask separately later) about how I can package all of this up as a sort of "library" for others to reuse, but for now you're correct.

That looks like it would work fine. Whether you want to use the Mutex itself or use a new-type wrapper or a type alias will depend on what the rest of the code looks like, and personal taste.
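A minimal sketch of handing such a Mutex-wrapped handle across the FFI boundary might look like the following. The odbc_version field and the alloc_env / free_env names are illustrative, not part of the ODBC API; the real entry points would be SQLAllocHandle and SQLFreeHandle:

```rust
use std::ffi::c_void;
use std::sync::Mutex;

// Hypothetical handle contents; the real driver state would go here.
struct EnvironmentClass {
    odbc_version: i32,
}

// Allocate a Mutex-wrapped handle and write its address into the
// out-pointer, as SQLAllocHandle would. Box::into_raw deliberately
// leaks the allocation: C now owns it.
// Safety: `output_handle` must be a valid, writable pointer.
unsafe fn alloc_env(output_handle: *mut *mut c_void) {
    let env = Box::new(Mutex::new(EnvironmentClass { odbc_version: 3 }));
    *output_handle = Box::into_raw(env) as *mut c_void;
}

// Reclaim ownership when the handle is freed, as SQLFreeHandle would.
// Safety: `handle` must have come from alloc_env and not be in use.
unsafe fn free_env(handle: *mut c_void) {
    drop(Box::from_raw(handle as *mut Mutex<EnvironmentClass>));
}
```

Any later entry point that receives the raw handle would cast it back to *const Mutex<EnvironmentClass> and lock it before touching the fields.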

Thanks! I don't know yet. I'm learning as we go along and this was already very helpful.
