How can I prevent a reverse engineer from viewing and modifying strings embedded in my program?

I want to store the private key used for decryption inside my Rust program, and I don't want others to be able to see it. Ideally it would also not be possible to view the decrypted string under dynamic debugging!

fn main() {
    println!("Hello World!");
}

For example: the program above outputs a piece of text that can be found and modified directly with static analysis. How can I prevent reverse engineers from modifying it?

My current approach is below, but done this way it seems like Rust might optimize it away?

pub fn decrypt_builtin_project(bytes: &mut Vec<u16>, content: &mut String) {
    // Undo the XOR obfuscation in place, then decode the UTF-16 text.
    for value in &mut *bytes {
        *value ^= 3;
    }
    *content = String::from_utf16(&bytes[..]).unwrap();
}

fn main() {
    let mut key = vec![6, 6, 6, 7]; // obfuscated code points baked into the binary
    let mut content = String::new();
    decrypt_builtin_project(&mut key, &mut content);
}

Is there a better solution?

I have no choice but to put the key in the program, because this is paid software and I'm hoping to prevent others from cracking it. Putting the key on a server is also problematic; for example, couldn't someone capture packets and mount a man-in-the-middle attack?

Has anyone implemented an anti-debugging feature?
For example: if the execution time from one line to the next is too long, that means someone is debugging my program, and then my program should immediately exit or destroy itself.
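
A minimal sketch of that timing idea, purely for illustration: the 50 ms threshold is an arbitrary assumption, and a heavily loaded machine can trip it just as easily as a debugger can.

use std::time::Instant;

// Run `f` and bail out if it took suspiciously long, on the assumption
// that someone single-stepped through it in a debugger.
fn run_with_timing_check<T>(f: impl FnOnce() -> T) -> T {
    let start = Instant::now();
    let result = f();
    if start.elapsed().as_millis() > 50 {
        std::process::exit(1); // or corrupt state, phone home, etc.
    }
    result
}

fn main() {
    // Stand-in for whatever sensitive work the real program does.
    let value = run_with_timing_check(|| 6u16 ^ 3);
    println!("{value}");
}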

Thanks for your replies.
After careful consideration, I will no longer put the key in the program.
In order to keep making money, we plan to put encryption and decryption on the server.

1 Like

Whoa! In some crazy coincidence, you just asked, literally one minute ago, the question that I was coming onto this forum to ask. :astonished:

2 Likes

Never store keys in binaries.

So many security vulnerabilities have been found due to this.

13 Likes

Before we even think about how to obfuscate keys in binary executables, consider this:

In order to get the key into the binary in the first place it must be in your source code or build system somewhere, somehow.

That likely means it gets backed up with all your source code backups. It gets saved in your git or other source code management system.

Now that key is all over the place. That is anything but a secure way to carry on.

3 Likes

This is hard at best. Look at game programming for lots of people trying to do it, often failing, and even more often using prevention approaches that get people very mad because they amount to rootkits against the machine.

Why do you need the key there? Can you have anything requiring it happen in a web service instead, for example?

9 Likes

I have no choice but to put the key in the program, because this is paid software and I'm hoping to prevent others from cracking it. Putting the key on a server is also problematic; for example, couldn't someone capture packets and mount a man-in-the-middle attack?

Anything you can write in normal source code and compile with a normal compiler can be cracked in a few minutes by someone with basic debugging skills. They just need to find the function that uses the key, set a breakpoint there, and read the unobfuscated key. That's better than having the key visible with strings, but don't treat it as protected.

If you want your app to frustrate crackers for at least a few hours, you need more advanced anti-debugger techniques, like these:

https://hot3eed.github.io/snap_part1_obfuscations.html

14 Likes

This was (I think) the point @scottmcm was trying to make: that is a false hope, because even the nastiest DRM schemes are defeated, and at some point they sort of become malware and/or spyware themselves.
It's a bad idea in principle, and there's real-world data available that suggests that putting it into practice doesn't even deliver the desired results.

Even something like making the program dial home has its drawbacks. What if, for instance, the user is on a machine with a spotty (or no) internet connection?

7 Likes

The reality is that if you put the key in the binary, then the key is in the binary, there for anyone to get out.

You can obscure the key in the binary so that it is not easily read with "strings" or whatever tool, and have code in the program deobscure it at runtime. But now whoever has the binary has the obscured key and the code to deobscure it.

One way that might stall attackers for a long time is to write your code in C and compile it with the "movfuscator": https://github.com/xoreaxeaxeax/movfuscator

Then every instruction in the binary is a MOV. Extracting any control flow out of that with disassemblers becomes very hard for the attacker.

Of course the price is huge and slow executables...

But, at some point during the execution of your code the key must be in a clear, usable state to work. BOOM, there it can be discovered.

9 Likes

It sounds like you need some kind of public/private key system. You can then store the public key inside the binary and sign authorizations with a private key stored on a server somewhere. If a leaked authorization makes it into the wild, you can look for it in your issuing logs to determine who it was issued to, and then refuse to do business with them in the future. To limit the damage that a leaked authorization can cause, you can make them expire after a week, month, or year.
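
A rough sketch of the client-side check, using only std; the field layout, the all-zero PUBLIC_KEY, and the signature_is_valid stub are made-up placeholders, and a real implementation would verify an Ed25519 (or similar) signature with an actual crypto crate.

use std::time::{SystemTime, UNIX_EPOCH};

// Public half of the key pair, baked into the binary; only the server
// ever sees the private half. The value here is a placeholder.
const PUBLIC_KEY: [u8; 32] = [0; 32];

// An authorization the server issued and signed for one customer.
struct Authorization {
    customer_id: u64,
    expires_at: u64, // Unix timestamp, seconds
    signature: Vec<u8>,
}

// Stand-in for a real signature check; always rejects so the sketch
// stays self-contained.
fn signature_is_valid(_key: &[u8], _message: &[u8], _signature: &[u8]) -> bool {
    false
}

fn authorization_is_valid(auth: &Authorization) -> bool {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock set before 1970")
        .as_secs();
    if now > auth.expires_at {
        return false; // expired; the client must request a fresh one
    }
    let message = format!("{}:{}", auth.customer_id, auth.expires_at);
    signature_is_valid(&PUBLIC_KEY, message.as_bytes(), &auth.signature)
}

fn main() {
    let auth = Authorization { customer_id: 42, expires_at: 0, signature: vec![] };
    println!("authorized: {}", authorization_is_valid(&auth));
}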

10 Likes

If you're communicating with the server over HTTPS or otherwise using TLS, you don't have to worry about this.

4 Likes

It's not clear to me how this helps.

Anyone can clone your program, and any associated data/configuration. And likely the whole system it runs in. Public/private key and all.

So now, if that clone system connects to your servers, how do you know which is the genuine original and which is the clone?

The best we have to defeat this today, as far as I can tell, are the chips in our credit cards and phone SIMs. Basically, devices designed to be very hard to reverse engineer, where all the secret stuff can be stored.

They can’t clone the private key, because it’s kept securely on your server. What they can clone is an authorization signed with the private key. This will indeed give access until the authorization expires, but multiple users of the same authorization can be detected via access patterns and monitoring crack-distribution websites.

Once that happens, you take business-level remedies against the accountholder that was originally issued the duplicated credential. It’s not a perfect system by any means, but it’s a reasonable check to keep honest people honest and limit the damage from leaks. If your software is genuinely useful, what percentage of customers will risk a lifetime ban to provide a key to the community that expires within the week?

This is certainly true, but overkill for the vast majority of software products. Trust but verify is a reasonable approach: there will be some shrinkage, as in any other retail environment, but it can often be kept at manageable levels.

6 Likes

Well, there is a thing.

In my current project, remote embedded systems are sending sensor data to our servers. All encrypted and identified by TLS certs. Just as you describe.

We don't much care about people copying our code; after all, it is nothing special. What we don't want is people cloning the thing and using the clone to inject bad data into our system.

If that were to happen we could no doubt detect it. But then what? Somehow we could determine the difference between the clone and the real thing. Maybe.

It's not so simple as taking business-level remedies against the account holders. They are just computers out there.

Currently the protection against this is physical. Those systems are in locked cabinets in inaccessible places. But hey, what protection is that? My colleague already has a master key that will open all of them.

It keeps me awake at night.

It worries me even more that the next step in the plan is to use those systems to command and control other devices they are connected to, rather than just report status as they do now.

2 Likes

It sounds like your application is way outside the domain which I assumed in my suggestion. If I had to design a high-security setup for this, it would probably be along the lines of intrusion detection for the cabinet. This can automatically silo data ostensibly coming from the potentially-compromised device until an appropriate set of people audit what happened while the cabinet was open.

Part of the audit process could be to send a new device key, after the interior of the cabinet is verified to be wiretap-free. In truly suspect cases, destroy the potentially-compromised equipment and replace it with a new, trusted, requisition.

1 Like

Yeah, that's why I'm trying to do it. Honestly, I'm not super worried about people hacking it. I just found out recently how super easy it is to reverse-engineer a game when I did it for a game of mine that ran on my Windows machine but needed a hack to get it to work on my Linux machine with Wine. It was trivial. As the guy who wrote the assembler course I read said, "once you understand the assembly, you can completely control the program".

The general consensus here is pretty much what I thought: it's a piece of cake for anybody ready to use debugging tools (Cutter is awesome, by the way) to get to your binary data.

I'm really just looking for something that will keep the average dude from browsing my game assets, which are protected by copyright law anyway. Honestly, at this point I'm feeling like I'll just store my assets in a zip file, tack 48 random bytes onto the beginning and end of the file so that the OS won't recognize it right off, and then change the extension to .bin.

Still, having a key in the binary would make it more secure because it would actually take a debugger to find it, but at this point it's just a question of how easy I want to make it.
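
For what it's worth, a minimal sketch of that zip-disguise idea: the 48-byte pad, the 0xA5 filler, and the "assets.zip" / "assets.bin" file names are all arbitrary choices for illustration (real junk bytes could come from the rand crate).

use std::fs;

const PAD: usize = 48; // junk bytes added to each end of the archive

// Wrap the zip in junk so the "PK" magic bytes no longer sit at offset 0,
// then save it under a .bin name.
fn disguise(zip_path: &str, bin_path: &str) -> std::io::Result<()> {
    let data = fs::read(zip_path)?;
    let mut out = vec![0xA5u8; PAD];
    out.extend_from_slice(&data);
    out.extend(vec![0xA5u8; PAD]);
    fs::write(bin_path, out)
}

// Strip the padding again before handing the bytes to a zip reader.
fn recover(bin_path: &str) -> std::io::Result<Vec<u8>> {
    let data = fs::read(bin_path)?;
    Ok(data[PAD..data.len() - PAD].to_vec())
}

fn main() -> std::io::Result<()> {
    disguise("assets.zip", "assets.bin")?;
    let _zip_bytes = recover("assets.bin")?;
    Ok(())
}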

If your device has a secure enclave, which all modern computers, phones, and tablets do, you can put your private key there and use it to sign things without exposing it. I did that with the Trusted Platform Module (TPM) on a laptop a number of years ago.

Intel boxes also have SGX, an environment where you can decrypt your private key without exposing it outside the private environment, even to the OS. That's the claim; the reality is a good deal less secure, but it may be adequate for your uses.

4 Likes

Life's too short to try to stop crackers. :slight_smile: It's a cat & mouse game and you won't win.

Use your time and effort to improve your app and support your users rather than spend it all on protecting the app. Assume most of your users will want to pay for it. The best that you can do is to use some distribution platform/app store which will do the purchase verification for you. Not perfect but the best value/effort ratio.

You can go down the "always online" route and implement most of your app logic on a server, letting your app be just a dumb client. You could then do the verification on the server, but depending on what you're selling it might not be cost-effective to run your servers. Keeping things upgraded and backwards compatible is non-trivial. When something breaks, your paying users won't be happy, so again it's a matter of what you want to focus on.

4 Likes

Zip files quite often have extra leading and trailing data, so that doesn't work so well in this specific case. If I just wanted to make it a bit harder, I'd use an AES-encrypted zip file and store the key in the binary, maybe XOR-ed with 0x55 to make it even more obscure.
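
A minimal sketch of that last part; the bytes in OBFUSCATED_KEY are placeholders, not a real key, and the actual AES-zip decryption would come from whatever archive crate you use.

// Key stored XOR-ed with 0x55 so it doesn't show up verbatim in `strings`.
const OBFUSCATED_KEY: [u8; 4] = [b'k' ^ 0x55, b'e' ^ 0x55, b'y' ^ 0x55, b'1' ^ 0x55];

// Recover the real key at runtime, right before opening the archive.
fn zip_key() -> Vec<u8> {
    OBFUSCATED_KEY.iter().map(|b| b ^ 0x55).collect()
}

fn main() {
    // The recovered key would be handed to the AES decryption routine.
    println!("{}", String::from_utf8_lossy(&zip_key()));
}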

2 Likes

What about using the TPM to store the private key and building a Trusted Execution Environment, or doing attestation from that point?

https://blog.hansenpartnership.com/using-your-tpm-as-a-secure-key-store/