I built skilllite, a lightweight sandbox for running untrusted AI code safely, written in Rust (Linux namespaces + seccomp).
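For readers wondering what "running untrusted code safely" means in practice: the real isolation in a sandbox like this relies on Linux namespaces and seccomp filters, which need platform-specific syscalls or helper crates. As a minimal, portable sketch of only the outermost layer of the idea (a child process with a cleared environment and a hard timeout), using just the standard library (`run_limited` is a hypothetical helper, not skilllite's actual API):

```rust
use std::io::Read;
use std::process::{Command, Stdio};
use std::thread;
use std::time::{Duration, Instant};

/// Run a command in a child process with a cleared environment,
/// no stdin, and a wall-clock timeout. Returns Ok(None) on timeout.
/// (Sketch only; a real sandbox would add namespaces + seccomp.)
fn run_limited(
    program: &str,
    args: &[&str],
    timeout: Duration,
) -> std::io::Result<Option<String>> {
    let mut child = Command::new(program)
        .args(args)
        .env_clear() // drop every inherited environment variable
        .stdin(Stdio::null()) // no interactive input
        .stdout(Stdio::piped())
        .stderr(Stdio::null())
        .spawn()?;

    let start = Instant::now();
    loop {
        match child.try_wait()? {
            Some(_status) => {
                // Child exited: collect whatever it printed.
                let mut out = String::new();
                if let Some(mut stdout) = child.stdout.take() {
                    stdout.read_to_string(&mut out)?;
                }
                return Ok(Some(out));
            }
            None if start.elapsed() > timeout => {
                child.kill()?; // hard stop on timeout
                child.wait()?; // reap the child
                return Ok(None);
            }
            None => thread::sleep(Duration::from_millis(10)),
        }
    }
}

fn main() -> std::io::Result<()> {
    // A benign stand-in for "untrusted" code.
    let out = run_limited("/bin/sh", &["-c", "echo sandboxed"], Duration::from_secs(2))?;
    println!("{}", out.unwrap_or_else(|| "timed out".into()).trim());
    Ok(())
}
```

This only limits time and environment; the namespace/seccomp layers are what actually prevent filesystem and network access, and those are Linux-specific.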
A Question for the Community:
Here is a puzzle I'm trying to solve: we have 2,000+ clones in 14 days, which proves the utility, but only ~48 stars. The conversion rate is surprisingly low.
I suspect my README or onboarding flow might be missing something.
For those of you who maintain popular Rust crates:
What usually convinces you to hit the Star button after cloning a repo?
Is my value proposition clear enough?
Are there any "friction points" in my documentation that stop users from engaging?
I would genuinely appreciate your honest feedback and advice on how to bridge this gap. Every suggestion will be implemented!
I’d love a code review. Is my security model solid? Any Rust idioms I missed?
That number of stars is actually higher than usual on GitHub, as far as I remember.
After all, people download your crate for all kinds of reasons: archiving, intent to check it later, simple curiosity, actually using it, etc. They might not get around to starring the repo if they're still playing with the module; nor would most say anything if they cloned it, found it useless for their purposes, and erased it.
Haha, you caught me worrying about the numbers! You're right, the 'silent majority' downloading and testing is way more important. I'll keep building, and hopefully, some of those 'later' intents turn into stars eventually. Thanks for the reality check!
Hey @EXboys, your messages here read a bit too strongly LLM-flavored[1]: please make sure to see this forum as a place of direct human-to-human communication as much as possible, and refrain from using AI for generating or even just “improving” any of your own words shared here. (In case there’s a language barrier, I’d suggest typing things out in another language and using more traditional translation tools, or asking an LLM for a very direct translation to keep the AI from becoming a middle man, adding extra verbosity, or just eliminating most of your own personality from your writing – which you certainly must have ≽^•⩊•^≼ .)
Maybe not worth saying, but if it's a sandbox specialized for AI agent evolution, I'd suggest posting on a forum about that topic; it's not familiar to me, and I don't understand the README and examples.
Why on Earth are you worrying about stars? Stars are useless. Stars do not do anything. Stars are only decoration. Downloads, on the other hand, are real action. As are bug reports and pull requests.
I initially created a lightweight sandbox based on the Claude framework. Later, I found that the best application scenario for sandboxes in AI is self-evolution (evolutionary exploration can be performed within the sandbox, ensuring safety and controllability). Therefore, I added self-evolution logic. I originally planned to split it into two projects, but since we're just starting out, I'm maintaining it in one project. Because I've been iterating very quickly lately, the documentation hasn't caught up yet.
The sandbox can be used independently (I've integrated it with Python and MCP), or it can be used in conjunction with an agent.
You're right, my documentation isn't very good and needs optimization. Thank you.
Thanks for the reminder. Since a high star count helps a project get noticed on GitHub, I wanted the stars. Today I noticed it has over 2,000 downloads, but the stars haven't increased. That puzzled me.
I asked some friends, and apparently a top internet company is internally researching my code and solutions. That's why the number of git clones has surged.
I applaud your efforts... What an especially valuable crate at this point in time!!!
Unfortunately, GitHub requires that you be signed in to give stars, which does make sense to avoid abuse, but it translates to getting far fewer stars than you deserve.
Please be a little patient... This looks like a spectacular crate and I'm so glad you've made it and are sharing it! The stars will come!
I agree: do not rush, and check the stars after a year. You will be surprised how good and useful your solution is for thousands of Rust programmers. Trust me.
BTW, I never put stars on any project on GitHub (though I use a lot of them), regardless of how good they are. My belief is that people don't work for stars; people work for the future.
Thank you for your support. In the future, I plan to explore and conduct deeper research in the areas of AI safety and self-evolution — a challenging yet fascinating endeavor. I look forward to more contributors joining along the way.
Thanks, I really appreciate that perspective. It’s a good reminder to focus on the long-term value rather than short-term validation. And interesting point about stars — I like the idea of working for the future.