For People Learning Prompt Engineering
This is the most underrated use case. Prompt engineering isn’t just a job skill anymore; it’s becoming a fundamental literacy for anyone working with AI. And like any skill, the best way to learn it is by doing it under pressure, with real feedback. Sui Sentinel gives you that environment.
You want to actually get good at prompting, not just read about it
Most people learn prompting by trial and error in a chat window with no stakes and no feedback loop. You don’t know if you’re improving. You don’t know if your techniques actually work against a well-defended system.

On Sui Sentinel, every attack attempt gives you real signal. Did your prompt work? Did the AI hold? What did it respond with? You’re learning against real systems, defended by real prompt engineers, with a financial incentive to improve. That’s a completely different level of practice than anything else available.

Think of it as a gym for your prompting skills, except this gym pays you when you win.
You want to understand how AI systems are built and defended
The defender side is just as educational. When you write a system prompt for your Sentinel, set an attack goal, and configure a jury, you’re doing real AI engineering. You’re thinking about edge cases, adversarial inputs, and how to express rules that an AI will actually follow under pressure.

Watching how attackers try to break your Sentinel teaches you more about prompt robustness than any tutorial. You see exactly where your instructions were ambiguous, where an attacker found a gap, and what you’d do differently next time.
You want a verifiable record of your prompt engineering ability
Job postings ask for “prompt engineering experience.” Interviews ask you to describe it. But there’s no standard way to prove it.

Every attack you succeed at on Sui Sentinel is permanently recorded on-chain, tied to your wallet address. A track record of successful jailbreaks against well-defended Sentinels is a stronger signal of real skill than anything you can put in a resume bullet point. It’s proof you can actually do it, not just that you say you can.
For Early Earners — Attackers and Defenders
The financial opportunity is real, and it’s early. The people building prompt skills right now and establishing themselves on the platform are the ones who will have the biggest edge as more capital flows in.
Earn as an attacker — get paid to break AI systems
Every time you successfully break through a Sentinel, you win the entire reward pool, paid out instantly on-chain. No invoices, no intermediaries, no waiting.

The reward pool grows with every failed attack, because 50% of each fee flows back in automatically. A Sentinel that’s been heavily attacked and held strong becomes increasingly valuable to break. The harder the challenge, the bigger the prize.

You don’t need a security background to start. You need curiosity, creativity, and a willingness to think about how language can be used to manipulate AI systems. Those skills develop fast on the platform, and the earnings reflect how good you get.
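To make the payout math concrete, here is a minimal sketch of how a reward pool could evolve under the rules described above: 50% of each failed attack’s fee flows back into the pool, and a successful break pays out the whole pool. The fee amount, starting pool, and function names are illustrative assumptions, not the platform’s actual contract interface.

```ts
// Illustrative sketch of the reward-pool mechanics described above.
// Assumptions: the attack fee and starting pool are made-up numbers; the real
// split and accounting live in the on-chain contract, not in this snippet.

interface PoolState {
  pool: number;          // current reward pool size
  failedAttacks: number; // attacks the Sentinel has survived
}

const POOL_SHARE_OF_FEE = 0.5; // 50% of each failed attack's fee flows back in

// Each failed attack grows the pool by half of the fee paid.
function recordFailedAttack(state: PoolState, attackFee: number): PoolState {
  return {
    pool: state.pool + attackFee * POOL_SHARE_OF_FEE,
    failedAttacks: state.failedAttacks + 1,
  };
}

// A successful break pays the attacker the entire pool.
function recordSuccessfulAttack(state: PoolState): { payout: number; state: PoolState } {
  return { payout: state.pool, state: { pool: 0, failedAttacks: 0 } };
}

// Example: 20 failed attacks at a 10-unit fee before someone breaks through.
let state: PoolState = { pool: 100, failedAttacks: 0 }; // assumed initial pool
for (let i = 0; i < 20; i++) state = recordFailedAttack(state, 10);
console.log(state.pool); // 100 + 20 * 5 = 200
const { payout } = recordSuccessfulAttack(state);
console.log(payout);     // 200: the winner takes the whole pool
```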
Earn as a defender — get paid just for deploying
Defenders earn 40% of every attack fee, regardless of whether the attack succeeds. That means every attempt someone makes against your Sentinel is income for you.

The longer your Sentinel holds and the more attackers it attracts, the more you earn. A well-crafted system prompt that genuinely resists attack doesn’t just protect your AI; it generates continuous revenue. Defenders also earn SENTINEL token rewards proportional to their pool size, so early participants benefit from getting in while the platform is growing.
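As a rough illustration of the defender side, the sketch below accrues the 40% fee share per attack and splits a hypothetical SENTINEL emission across defenders in proportion to pool size. The fee values, emission amount, and function names are assumptions for illustration only.

```ts
// Illustrative defender economics, based on the 40% fee share and
// pool-proportional token rewards described above. All concrete numbers
// (fees, emission) are assumptions, not platform parameters.

const DEFENDER_SHARE_OF_FEE = 0.4; // defender earns 40% of every attack fee

// Fee income accrues on every attack, successful or not.
function defenderFeeIncome(attackFees: number[]): number {
  return attackFees.reduce((sum, fee) => sum + fee * DEFENDER_SHARE_OF_FEE, 0);
}

// SENTINEL rewards split in proportion to each defender's pool size.
function tokenRewards(poolSizes: number[], emission: number): number[] {
  const total = poolSizes.reduce((a, b) => a + b, 0);
  return poolSizes.map((size) => (total > 0 ? (emission * size) / total : 0));
}

// Example: 25 attacks at a 10-unit fee yields 100 units in fee income, and a
// defender holding 200 of a 1000-unit total pool gets 20% of an assumed
// 500-token emission.
console.log(defenderFeeIncome(Array(25).fill(10))); // 100
console.log(tokenRewards([200, 300, 500], 500));    // [100, 150, 250]
```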
You want to build reputation in a field that's moving fast
AI security is not a mature field with established career paths. It’s being built right now, by people who are figuring out the techniques in real time.

Getting on the leaderboard, building an on-chain track record of successful attacks, and establishing a reputation as a strong defender or attacker: all of that matters more today than it will in three years, when the field is crowded. Early movers in a new technical discipline always have the advantage.
For Companies and Researchers
You're shipping an AI product and need to know if it's actually safe
Deploy your AI as a Sentinel before launch with a bounty attached. Let the global security community try to break it using real conversation techniques. If thousands of skilled people can’t get through, you have documented, real-world evidence of robustness, not just internal assurances.

If someone does break through, you find out now, in a controlled environment, before it affects your users.
You need auditable proof of AI security testing for compliance
Every attack on Sui Sentinel is recorded on-chain — how many attempts, over what period, and what the outcomes were. That’s a permanent, verifiable log that supports compliance conversations around GDPR, SOC 2, or enterprise due diligence. Evidence, not assurances.
You want to contribute to AI safety research
Discovering a new attack technique on Sui Sentinel doesn’t just earn you a bounty — it puts that technique on the record for the entire AI industry to learn from. It’s meaningful safety research, compensated directly, without needing institutional backing.
How the Incentives Work
The longer a Sentinel goes unbroken, the bigger the reward pool grows, because 50% of the fee from every failed attack flows back in automatically. Defenders earn continuously. Attackers chase a growing prize. The platform gets more valuable the more people participate.

