Statically Linking Parts of a Shared Library?

Thanks, progress but still a nonzero return code:

The cargo build command should've output a list of libraries that you need to link, like so:

note: link against the following native artifacts when linking against this static library
note: the order and any duplication can be significant on some platforms, and so may need to be preserved
note: library: dl
note: library: rt
note: library: pthread
note: library: gcc_s
note: library: c
note: library: m
note: library: rt
note: library: pthread
note: library: util

You need to pass each of those using -l.

Cargo doesn't emit anything like this when I run cargo build --lib -vvv.

IIRC you must pass --print=native-static-libs to cargo build to get this output on recent toolchains.

Also, make sure you run a clean build first; the note is only emitted when the crate is actually rebuilt.

Cargo complains: Unknown flag: --print:

$ cargo --version
cargo 0.25.0 (96d8071da 2018-02-26)
$ rustc --version
rustc 1.24.1 (d3ae9a9e0 2018-02-27)
$ cargo build --print=native-static-libs --lib
error: Unknown flag: '--print'

    cargo build [options]

Ah, sorry, it's apparently a rustc flag, so you need to use RUSTFLAGS or cargo rustc.
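For example, either of these should surface the note (a sketch, run from the crate root of a staticlib crate):

```shell
# Pass the flag only to the final rustc invocation for this crate...
cargo rustc --lib -- --print=native-static-libs

# ...or set it for every rustc invocation via RUSTFLAGS:
RUSTFLAGS="--print=native-static-libs" cargo build --lib
```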

Off a clean build:

note: native-static-libs: -lpython3.4m -lutil -lutil -ldl -lrt -lpthread -lgcc_s -lc -lm -lrt -lpthread -lutil -lutil
gcc -Wl,--whole-archive -lpython3.4m -lutil -lutil -ldl -lrt -lpthread -lgcc_s -lc -lm -lrt -lpthread -lutil -lutil -shared ~/.cache/cargo/target/debug/liblambda.a  -o ~/.cache/cargo/target/debug/


/usr/bin/ld: /home/vagrant/.cache/cargo/target/debug/ version node not found for symbol SSLeay_version@OPENSSL_1.0.1
/usr/bin/ld: failed to set dynamic section sizes: Bad value

I have abandoned trying to make a static library in favor of a shared library which statically links OpenSSL. I'm now getting some really weird issues:

  OPENSSL_LIB_DIR=/usr/lib64 \
  OPENSSL_INCLUDE_DIR=/usr/include/openssl \
      -lcom_err \
      -lc \
      -ldl \
      -lgssapi_krb5 \
      -lk5crypto \
      -lkrb5 \
      -lkrb5support \
      -lpcre \
      -lpthread \
      -lresolv \
      -lselinux \
      -lz \
    " \
    cargo build --lib --release

Interestingly enough, I can compile my library:

extern crate openssl;

fn init() {
    // ...
}
However, as soon as I try to include rusoto, I get compiler errors:

extern crate rusoto_core;
extern crate rusoto_kms;

use rusoto_core::region::Region;
use rusoto_kms::Kms;
use rusoto_kms::KmsClient;
use rusoto_kms::DecryptRequest;

impl SecureConfig {

    pub fn new() -> Self {
        SecureConfig {
            client: Arc::new(KmsClient::simple(Region::UsEast1)),
        }
    }

    /// Decrypt and return the Google OAuth Client Secret
    pub fn google_oauth_client_secret(&self) -> String {
        self.client.decrypt(&DecryptRequest {
            ciphertext_blob: Vec::from(Config::enc_google_oauth_client_secret().as_bytes()),
            encryption_context: None,
            grant_tokens: None,
        })
        // ...
    }

    /// Decrypt and return the Google OAuth Signing Token
    pub fn google_oauth_signing_token(&self) -> String {
        self.client.decrypt(&DecryptRequest {
            ciphertext_blob: Vec::from(Config::enc_google_oauth_signing_token().as_bytes()),
            encryption_context: None,
            grant_tokens: None,
        })
        // ...
    }
}

Now, when I compile:

  = note: /usr/bin/ld: /home/circleci/project/target/release/deps/ version node not found for symbol SSLeay_version@OPENSSL_1.0.1
          /usr/bin/ld: failed to set dynamic section sizes: Bad value
          collect2: error: ld returned 1 exit status

I'm kind of at the end of my rope here. I've never spent this much time in linker hell.

I suppose musl might be a requirement at this point. This gets back to my original inquiry: is there a way to build a shared library so that Python can still use it, yet bake all of its dependencies in statically? I don't want to keep fussing around with linker flags.

I don't have an answer to your specific question, but I believe I was faced with a similar issue when writing my AWS Lambda function in Rust (to be used by the Python 3.6 executor).

For this lambda, I used the crowbar crate. This sets up the Python binding for you so you only need to provide the Lambda entrypoint.

As you discovered, OpenSSL is a pain there. This was reported in crowbar issue 20. I guess you got the same error?

For me, what fixed the issue was to use a Docker image that mimics the AWS Lambda environment as closely as possible. The company behind this effort (LambCI) "scraped" the environment by dumping the whole Lambda host to an S3 bucket and building a Docker image from that (with minimal modifications).

Using this image as a base, you can then add whatever extra tooling you need in a Dockerfile, for example installing rustup. Pushing that image to Docker Hub or another image registry then allows building the app in a CI environment.

See my Dockerfile for an example of how to build a working image for AWS Lambda and Rust.

Hopefully that will help you! Let me know if it does or not. Good luck!


I have built a Docker image as a build environment :smile: and have been using that. Here's the specific issue: the version of OpenSSL that comes in the Amazon Linux AMI/Docker Image at boot is 1.0.1. Upgrading or installing virtually any package results in OpenSSL being upgraded as everything now depends on 1.0.2 and up.


  1. I can't build a Docker image with OpenSSL headers or other tools because installing or upgrading anything results in OpenSSL being upgraded.
  2. Since installing/upgrading anything results in an OpenSSL upgrade, my build environment is skewed; the AMI comes up with OpenSSL 1.0.1 and things freak out.
  3. I can't pin the OpenSSL package because most software on the system is now dependent on >= 1.0.2.
  4. I can't install older OpenSSL headers because Amazon Linux's repository no longer contains older versions.

This whole situation feels like a gigantic clusterfsck.

I'm hoping to open source some significant contributions to Crowbar, which is an awesome bridge to Lambda. I built static routing for six different types of events that can be received over Lambda. Amazon bills a minimum of 100ms of CPU time, and my Rust was finishing in 400 microseconds, so I'm being billed orders of magnitude more than I'm using :smile:

Unfortunately, this inconsistent environment is causing some significant problems; I've spent the last week smashing my head against the wall over it.


If there is a way for me to produce a static library, then transform it into a shared library that links dynamically against only libc, pthread, etc. and statically compiles everything else, that's what I need. I don't want to keep flipping bits with linker flags, as it's really brittle.

I totally agree that Rust on Lambda makes a lot of sense. Rust is so fast that we are getting charged way more than what we really use! :smiley: With AWS pricing, CPU and memory are tightly coupled: you select how much memory you want/need, and the CPU power is scaled accordingly. So less memory used and faster execution time means you can select the cheapest plan without trouble.

I did face the same issue with Amazon Linux's OpenSSL version (1.0.2) not matching the one included in the Lambda environment (1.0.1). The problem comes from building the application on the AMI (using OpenSSL 1.0.2) and running it on Lambda (using OpenSSL 1.0.1).

The "fix" I'm talking about is to not use AMI but use lambci/docker-lambda docker image instead. This Docker image is identical to the Lambda environment, including the same OpenSSL version (modulo some small changes). This way you can be sure to get the exact same version of libraries, so no discrepancy.
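The workflow then looks something like the following sketch ("my-rust-lambda-builder" is a hypothetical image name, standing in for an image built FROM the lambci base with rustup added):

```shell
# Build inside an image derived from lambci/docker-lambda, so the
# libraries seen at link time match the Lambda runtime exactly.
# "my-rust-lambda-builder" is a hypothetical image name.
docker run --rm \
    -v "$PWD":/code \
    -w /code \
    my-rust-lambda-builder \
    cargo build --release --lib
```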

Mixing static and dynamic libs is not trivial. A static library is (almost) simply a concatenation of the built object files, while a dynamic lib has a different format (a fully linked ELF object). Not sure of a way to easily mix those two...


I'm really sorry, but I think I gave you bad advice. While building a static library and then compiling that to a shared library is possible, it shouldn't really be necessary for your use case. Thus, the setup that gives you a "version node not found for symbol" error really ought to work. I'll see if I can figure out the cause…

That's the issue.

You say this in your naftulikay/rust-openssl-static-example's README:

We use a Docker container running the Lambda version of Amazon Linux for builds. This makes the execution environment as close to the build environment as possible.

This actually is not the closest you can get to AWS Lambda; I fell into the same trap. From what I could find, the Lambda environment is not published the way the Amazon Linux AMI is. You can use lambci's Docker image instead. It is (almost) identical to AWS Lambda: it's a tar of the full Lambda filesystem, saved to S3 and extracted into a Docker image.

Take a look at it :wink:


I'm so glad to be talking with someone who has experience with this! According to Amazon, there's a specific amazonlinux tag corresponding to the Amazon Linux version that Lambda runs, but no packages are pinned, so I suppose I'm screwing it up there.

Here is my Dockerfile:

FROM amazonlinux:2017.03.1.20170812

MAINTAINER Naftuli Kay <>

ENV RUST_USER=circleci

  autoconf \
  automake \
  bash-completion \
  cmake \
  curl \
  file \
  gcc \
  git \
  jq \
  libtool \
  make \
  man \
  man-pages \
  pcre-tools \
  pkgconfig \
  python-pip \
  python34-pip \
  sudo \
  tree \
  unzip \
  wget \
  which \
  zip \
  glibc-static \
  openssl-static \
  pcre-static \
  zlib-static \

  binutils-devel \
  openssl-devel \
  kernel-devel \
  libcurl-devel \
  libffi-devel \
  pcre-devel \
  python-devel \
  python34-devel \
  xz-devel \
  zlib-devel \

# upgrade all packages, install epel, then install build requirements
RUN yum upgrade -y > /dev/null && \
  yum install -y epel-release >/dev/null && \
  yum install -y ${_TOOL_PACKAGES} ${_STATIC_PACKAGES} ${_DEVEL_PACKAGES} && \
  yum clean all

# install and upgrade pip and utils
RUN pip-3.4 install --upgrade pip setuptools && pip-3.4 install awscli

# add ldconfig for /usr/local
RUN echo '/usr/local/lib' > /etc/

# create sudo group and add sudoers config
COPY conf/sudoers.d/50-sudo /etc/sudoers.d/
RUN groupadd sudo && useradd -G sudo -u 1000 -U ${RUST_USER}

# add rust profile setup
COPY conf/profile.d/ /etc/profile.d/

# deploy our tfenv command
RUN install -o ${RUST_USER} -g ${RUST_USER} -m 0700 -d ${RUST_HOME}/.local ${RUST_HOME}/.local/bin
COPY bin/tfenv ${RUST_HOME}/.local/bin
RUN chmod 0755 ${RUST_HOME}/.local/bin/tfenv && \
  chown ${RUST_USER}:${RUST_USER} ${RUST_HOME}/.local/bin/tfenv

# install rustup
RUN curl -sSf | sudo -u ${RUST_USER} sh -s -- -y && \
  ${RUST_HOME}/.cargo/bin/rustup completions bash | tee /etc/bash_completion.d/rust >/dev/null && \
  chmod 0755 /etc/bash_completion.d/rust && \
  rsync -a ${RUST_HOME}/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/share/man/man1/ /usr/local/share/man/man1/

# degoss the image
COPY bin/degoss goss.yml /tmp/
RUN /tmp/degoss /tmp/goss.yml

ENV PATH="/home/${RUST_USER}/.cargo/bin:/home/${RUST_USER}/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

CMD ["/bin/bash", "-l"]

The upgrade commands are what mess everything up.

If I just change my base image to lambci/lambda and the appropriate tag, will that do everything I need or do I still need to not install any packages for fear of updating OpenSSL?

Okay, after some spelunking: if you do still want to link OpenSSL statically, this bit of magic will make it work:

RUSTFLAGS="-C link-arg=-fuse-ld=gold -C link-arg=-Wl,--exclude-libs=ALL"

In short, the linker is getting confused because it thinks the output shared library might want to re-export the symbols from libssl/libcrypto, but the version script supplied by rustc doesn't mention the right versions…

I'm not sure what the root cause is, though.

Update: It looks like this versioning stuff comes from a distro patch to OpenSSL; it's not in any upstream version. And I get the same error with a bog-standard compiler invocation that compiles a C file into a shared library, statically linking OpenSSL:

# note: openssl-1.0.2n/ is patched with the distro patch
$ gcc -shared -o /tmp/ wat.c -Wl,-Bstatic -Lopenssl-1.0.2n  -lssl -lcrypto -Wl,-Bdynamic -fPIC  
/usr/bin/ld: /tmp/ version node not found for symbol SSLeay@@OPENSSL_1.0.1
/usr/bin/ld: failed to set dynamic section sizes: Bad value
collect2: error: ld returned 1 exit status

So I think this is fundamentally a mistake on the distro's end.

Therefore, another way to avoid the issue would be to compile your own OpenSSL to statically link against, rather than using the existing one.
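A sketch of that approach (the version matches the 1.0.2n used above; the prefix is illustrative, and the OPENSSL_DIR / OPENSSL_STATIC variables are the usual openssl-sys knobs, but verify them against your crate versions):

```shell
# Build a private, static-only OpenSSL; version and prefix are illustrative.
curl -sSO https://www.openssl.org/source/openssl-1.0.2n.tar.gz
tar xzf openssl-1.0.2n.tar.gz
(cd openssl-1.0.2n && ./config no-shared -fPIC --prefix=/opt/openssl && make && make install)

# Then point the openssl-sys build script at it:
OPENSSL_DIR=/opt/openssl OPENSSL_STATIC=1 cargo build --lib --release
```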


Thanks for playing around with this. I'm not surprised that it's something like that. Lambda should have anticipated security and major version releases and provided a customizable "base image", giving users a way to test and update to the latest code. FWIW: Amazon Linux for Lambda is still using OpenSSL 1.0.1. Were security patches applied? Hard to know.

Again, thank you all so much. I finally have a working image which meets my requirements:

Please see the credits in there for the recognition y'all deserve. Thanks so much for the help!


Great! Glad I could help. That description is epic :wink: