Hi all,
I'm currently working on refactoring a C++ ODBC API library, and I've run into a problem with a certain function argument (a mutable pointer) that should be updated by the ODBC driver to reflect the fetched result-set column, but isn't.
Please have a look at the comments in the source code below, as I describe my problem in them as well.
// Inside the extern "C" definitions:
pub(crate) fn SQLBindCol(
    statementHandle: SQLHSTMT,
    columnNumber: SQLUSMALLINT,
    targetType: SQLSMALLINT,
    // the ODBC interface stores the value of this
    // result-set column in this buffer (*mut c_void)
    targetValuePtr: SQLPOINTER,
    // tells the ODBC interface how big my buffer is
    bufferLength: SQLLEN,
    // this one will be set by the ODBC interface, indicating whether the
    // column data is NULL, or how many bytes are stored within
    // the targetValuePtr buffer.
    strLen_or_indPtr: *mut SQLLEN,
) -> SQLRETURN;
The problem is that when I call `SQLBindCol` with a desired column and then fetch the result set with `SQLFetch`, I notice that `strLen_or_indPtr` keeps its value from the initialization:
/// Stores the information retrieved by calling `SQLDescribeCol`
#[derive(Debug)]
pub struct ColumnDesc {
    name: String,
    data_type: ColDataType,
    size: u64,
    decimal_digits: i16,
    nullable: bool,
    col_idx: i16,
}

/// My type to store the fetched data from the ODBC interface
#[derive(Debug)]
pub struct ColumnBuffer {
    // see above
    desc: ColumnDesc,
    // the actual data from the DB column
    buffer: Vec<u8>,
    // how many bytes are set in the buffer (-1 if none)
    data_len: SQLLEN,
}
pub fn bind_columns(&self, columns: Vec<ColumnDesc>) -> Result<Vec<ColumnBuffer>> {
    let mut column_buffers = Vec::new();
    for column in columns {
        let mut buffer = ColumnBuffer {
            buffer: vec![0; column.size as usize],
            desc: column,
            data_len: 0,
        };
        let res: OdbcResult = unsafe {
            ffi::SQLBindCol(
                *self.0,
                buffer.desc.col_idx as SQLUSMALLINT,
                buffer.desc.data_type.into(),
                buffer.buffer.as_mut_ptr() as *mut c_void,
                buffer.buffer.len() as SQLLEN,
                // here I pass a mutable pointer to data_len
                &mut buffer.data_len,
            )
        }.into();
        // error handling ... unimportant for now
        column_buffers.push(buffer);
    }
    Ok(column_buffers)
}
// ...and calling it
// ... SQLExecDirect(...);
// ... SQLNumResultCols(...);
// ... SQLDescribeCol(...);
let mut buffers = stmt.bind_columns(columns)?;
while !stmt.fetch().failed() {
    for buffer in &mut buffers {
        // here I can see that buffer.buffer changes, but buffer.data_len does not
        println!("{:?}", buffer);
        buffer.flush();
    }
}
Example output:
ColumnBuffer {
    desc: ColumnDesc { name: "PARTGROUP", data_type: BigInt, size: 20, decimal_digits: 0, nullable: true, col_idx: 1 },
    buffer: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    data_len: 0
}
// The column value for this fetched record is in fact NULL, but I can't determine that here, because
// the buffer contains zeros either way, whether `PARTGROUP = 0` or `PARTGROUP = NULL`. I need `data_len`
// to be set to `-1` in this case.
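For illustration, this is roughly how I want to interpret `data_len` once the driver actually writes it. The `value` helper below is hypothetical, and I'm assuming `-1` stands for ODBC's SQL_NULL_DATA here:

impl ColumnBuffer {
    /// Hypothetical accessor: `None` if the driver flagged the column as NULL,
    /// otherwise the bytes the driver actually wrote into the buffer.
    fn value(&self) -> Option<&[u8]> {
        if self.data_len == -1 {
            // -1 (SQL_NULL_DATA) would mean the column value is NULL
            None
        } else {
            // only the first `data_len` bytes of the buffer are valid;
            // clamp to the buffer length in case the value was truncated
            let n = (self.data_len as usize).min(self.buffer.len());
            Some(&self.buffer[..n])
        }
    }
}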
Now I don't understand why `ColumnBuffer.buffer` gets mutated by the ODBC interface, but `ColumnBuffer.data_len` does not. Might this have something to do with the heap-allocated value vs. the stack-allocated value?
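To show what I mean by that suspicion, here is a minimal, ODBC-free sketch (with `data_len` typed as plain `i64` instead of `SQLLEN`, and no `desc` field) that prints the addresses involved when a struct like my `ColumnBuffer` is pushed into a `Vec`. The heap allocation behind the inner `Vec<u8>` keeps its address across the move, while the `data_len` field lives inline in the struct:

struct ColumnBuffer {
    buffer: Vec<u8>,
    data_len: i64, // stand-in for SQLLEN in this sketch
}

fn main() {
    let mut column_buffers = Vec::new();
    let mut buffer = ColumnBuffer {
        buffer: vec![0u8; 20],
        data_len: 0,
    };

    // pointers as they would be handed to SQLBindCol
    let buf_ptr = buffer.buffer.as_mut_ptr();      // points into the heap allocation
    let len_ptr: *mut i64 = &mut buffer.data_len;  // points into the local struct

    // the struct is moved into the Vec here
    column_buffers.push(buffer);

    let buf_ptr_after = column_buffers[0].buffer.as_mut_ptr();
    let len_ptr_after: *mut i64 = &mut column_buffers[0].data_len;

    // buffer keeps its address, data_len gets a new one
    println!("buffer:   {:p} -> {:p}", buf_ptr, buf_ptr_after);
    println!("data_len: {:p} -> {:p}", len_ptr, len_ptr_after);
}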
Let me know if you need more source code.
Thanks in advance.