This should not (so much) be compared with Fully Homomorphic Encryption (FHE) but with a Trusted Execution Environment (TEE). It is a very elegant and minimal way to implement TEEs, but suffers from the same drawbacks: a data owner has to trust the service provider to publish the public keys of actual properly constructed Mojo-V hardware rather than arbitrary public keys or public keys of maliciously constructed Mojo-V hardware.
[1] https://en.wikipedia.org/wiki/Trusted_execution_environment
I created Mojo-V.
I see Mojo-V as more like FHE than a TEE, for three primary reasons: 1) Like FHE, the tech is applied to variables, and computation that doesn't touch protected variables is not affected; TEEs protect processes. 2) Like FHE, Mojo-V lacks software, timing, and microarchitectural side channels; TEEs are riddled with side channels. 3) Like FHE, no trust is extended to software, because it cannot see the data it is processing; TEEs require that clients trust that the attested software has their best interests in mind.
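Point 1 can be sketched with a toy Python model (my own illustration of the assumed per-variable semantics, not the real ISA): operations touching a protected value stay protected, while ordinary plaintext computation is untouched.

```python
# Toy model of per-variable protection (assumed semantics, not Mojo-V's
# actual encoding): a value wrapped in Enc stays "encrypted" through
# arithmetic, and code that never touches an Enc value is unaffected.
class Enc:
    def __init__(self, v):
        self._v = v  # in real hardware, visible only inside the chip

    def __add__(self, other):
        o = other._v if isinstance(other, Enc) else other
        return Enc(self._v + o)  # result remains protected

    __radd__ = __add__

salary = Enc(90_000)    # protected variable
bonus = 5_000           # ordinary plaintext, handled normally
total = salary + bonus  # touching a protected value taints the result
print(isinstance(total, Enc))  # → True
```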
Public-key signing works like SGX: the vendor signs the public key to certify that it is from real Mojo-V hardware.
To be clear, it's not a TEE replacement but does address one of the most common use cases of TEEs
You could have the keys signed by a chip maker, which cuts the hosting provider out and reduces the trust surface to the manufacturer only. Unless your adversary is someone sophisticated enough to do surgery on chips.
It’s still not FHE but it’s about as good as you can get otherwise.
> Unless your adversary is someone sophisticated enough to do surgery on chips.
Since the threat assessment is important for deciding the strength of countermeasures, let me just add that this isn't as uncommon as you may believe. A company that I worked for had a decent capability to do this, and they were using it just to investigate the failures of electronic subsystems in their projects. Imagine what a more dedicated entity could achieve. This is why standards like FIPS 140-2/3 level-3/4 are very relevant in a significant number of corporate cases.
Talking about chip surgeries, I wish our distinguished expert Ken Shirriff could throw some light on the process. His work on legacy chips is among the most noteworthy in the field.
I created Mojo-V
I agree that side channel and physical attacks are crucial to stop. The predecessor to Mojo-V (Agita Labs TrustForge) was red teamed for three months including differential physical measurement attacks, and the system was never penetrated. So where there is a will there is a way!
Mojo-V stops software, instruction-timing, microarchitectural, and ciphertext side channels. Vendors can stop analog attacks if they choose to, but the reference design, which I am building, is meant to be really simple to integrate into an existing RISC-V core. Adding Mojo-V only requires changes to the Instruction Decoder and the Load-Store Queue, regardless of the complexity of the microarchitecture.
I created Mojo-V
Yes, exactly: because it is a privacy tech, the key/control channel tunnels through all software into the Mojo-V trusted H/W.
In the spec, I've been working on new Appendices comparing Mojo-V to TEEs, FHE, CHERI, and other high security tech. Mojo-V is a new thing, so absorbing it will take a while! :-)
I see it as a new design point between TEEs and FHE, but much closer to FHE. TEEs are fast but not good at establishing trust with untrustworthy service providers; FHE is the ultimate in zero trust, as all trust is in the math. Mojo-V eliminates trust in software, programmers, IT staff, attackers, and malware by rooting it in trusted hardware, and it runs at near-native speed.
And yeah, my mission is to snuggle as close to FHE as hardware can get!
Couldn't the keys be loaded once, in private write-only flash memory, by the user of the chip?
I created Mojo-V.
IMHO, the service provider is the last one that should ever be able to see the keys :-). They're the ones we want to keep sensitive data away from.
Keys are injected into the HW with public-key encryption. This requires that the HW have a key that only the HW knows (its secret key). This key is made by a weak PUF circuit, which is basically a circuit that measures silicon process variation. So the keys are born in the silicon fab, through the natural variability of the silicon fabrication process. I didn't invent this, it is an old idea. Intel SGX uses the same approach.
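The shape of that flow can be sketched in a few lines (a toy, NOT production crypto: the modulus is tiny and the "PUF" here is just a fixed string standing in for per-die process variation):

```python
# Toy sketch of PUF-rooted key injection: the chip's secret is derived
# from silicon process variation ("born in the fab"), and the data owner
# wraps a key to the chip's public key, so software never sees it.
import hashlib

P = 2**61 - 1   # toy Diffie-Hellman modulus; real systems use 2048+ bits
G = 3

def kdf(seed: bytes) -> int:
    # Stand-in for conditioning a weak-PUF measurement into a stable key.
    return int.from_bytes(hashlib.sha256(seed).digest(), "big") % P

chip_sk = kdf(b"per-die process variation sample")  # never leaves the chip
chip_pk = pow(G, chip_sk, P)                        # published, vendor-signed

# Data owner: ephemeral key exchange against the chip's public key.
owner_sk = kdf(b"owner randomness")                 # stand-in for os.urandom
owner_pk = pow(G, owner_sk, P)                      # sent to the chip
owner_shared = pow(chip_pk, owner_sk, P)

# Chip derives the same secret from the owner's ephemeral public key.
chip_shared = pow(owner_pk, chip_sk, P)
assert owner_shared == chip_shared
wrap_key = hashlib.sha256(owner_shared.to_bytes(8, "big")).digest()
```

Both sides now hold `wrap_key` without it ever crossing the wire, which is what lets the data owner inject keys through untrusted software.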
The intended use case is remote execution, where the user (data owner) pays a service provider to run services on the provider's hardware. It could still work if the user somehow prepared the chip herself and shipped it to the service provider to be used on her future data, but most users would not want to bother with that first step.
After skimming through the documentation this seems like a nice solution, but I'm not sure if this is a problem we want to solve.
Consumers are discovering the downside of cloud computing when their heating system can't turn on because Cloudflare is down. A cheaper and more reliable solution is still on-premises computing.
Large social network and content platforms don't have any incentive to keep your data safe because they want to monitor and own everything.
Maybe this is for something like a government running a public service?
> I'm not sure if this is a problem we want to solve
Who is this we you speak of?
I for one much prefer my cloud services and would love TEE I can control.
> A cheaper and more reliable solution is still on-premises computing.
I assure you that my use of Cloudflare services ($0 in nearly 10 years) is much more reliable and much cheaper than hardware I run.
I was genuinely asking, what cloud service do you use where trusted computing is essential for the core functionality of that service? What elements of the computational process do you not trust those services to perform for you?
My point about Cloudflare was more about them taking down essential services that could run just as well on-premises, like a heating controller.
I want good confidential compute for cases where E2EE is impractical, like an email server, or Immich with server-side ML/processing, etc.
Who are you protecting data access from in those cases? My suggestion was that it's probably more practical to run those kinds of solutions on a hardware stack you trust: in your basement, or in a small box on the wall in your living room.
Besides, the specific extension we're talking about protects registers and computation, not shared memory.
The issue is, unless you can be 100% sure your hardware hasn't been built with a vulnerability or backdoor, or subjected to an evil maid attack, then you can't be sure it's trustworthy.
Was it really wise to name this Mojo when Chris Lattner, former Head of Engineering at SiFive, also called his well-funded programming language Mojo?
Was it really wise to name both of them Mojo when Mr. Evil stole it from Austin Powers back in 1999?
Doctor Evil. He didn’t spend six years in evil medical school to be called Mister, thank you very much.
It’s called Mojo-V not just Mojo
So close to Mojave, I feel like they could have done something with that.
This could be:
Great for security - Being able to safely compute secrets is a very difficult problem.
Fucking awful for security - More OEM secret controls and "analytics" that devolve into backdoors after someone yet again posts keys online.
I created Mojo-V.
There are no backdoors, but there's no integrity checking either, so a Mojo-V voting machine could take an encrypted vote, throw it away, and add +1 to the attacker's favorite candidate.
A computational integrity-checking mechanism is coming soon: it will attach a concise proof to every encrypted Mojo-V value, proving to the data owner that their requested computation was faithfully performed. The mechanism also supports safe disclosures.
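The actual proof mechanism isn't published yet, so purely as intuition for how a computation can be made tamper-evident, here is a hash-chain transcript sketch (my own illustration, not the Mojo-V design):

```python
# Illustrative-only sketch: bind a sequence of operations into a hash
# chain so that any dropped, reordered, or altered operation changes
# the final digest the data owner checks.
import hashlib

def step(digest: bytes, op: str, operands: tuple) -> bytes:
    # Fold the next operation into the running transcript digest.
    payload = digest + op.encode() + repr(operands).encode()
    return hashlib.sha256(payload).digest()

# "Hardware" side: execute and log the requested computation.
d = b"\x00" * 32            # transcript root
d = step(d, "add", (1, 2))
d = step(d, "mul", (3, 4))

# Data owner: replay the ops they requested and compare digests.
v = b"\x00" * 32
v = step(v, "add", (1, 2))
v = step(v, "mul", (3, 4))
assert v == d               # any deviation would break this check
```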
This should give data owners strong controls over what can be done with their data.
The platform owner can manage keys and data contracts in the processor, which should enable them to rotate secrets constantly.
In other hardware there is an OEM secret because the manufacturer is trying to keep users out of "their hardware", in this case we're trying to keep everyone except the data owner out.
And the relationship to Mojo programming language is?
RISC-V is inevitable.