
Zero Knowledge Probabilistic Proof Systems


Transcript by Bryan Bishop

Historical Papers in Cryptography Seminar Series, Simons Institute

She has received several awards, she is a source of incredibly creative ideas, a very effective advisor and mentor, and I am always inspired when she speaks. So please welcome her.

There is no better place to give this historical talk on zero knowledge than at Berkeley where there are pre-historical origins of zero knowledge. At the same time it's a difficult talk to give, because this is a very distinguished group here, and therefore I changed the talk from one for a broad audience to one that might be of interest to everyone here. In any case, my talk will look like this. I am going to talk about the pre-history, historical from the perspective of someone who was a graduate student. From the point of view of what it was like at that time, which may be different from today. Then I will talk about the impact this has had on different fields and the consequences. And because I was hoping that this would be interesting to the people in this program, I have included a lot of open problems. I spent my week coming up with these open problems, so I hope they're open. Some of them may not be open, so please correct me. I think a good fraction are probably open. But they are not some of the obvious ones as far as I know. So let's start.

I am going to talk from the point of view of a graduate student at Berkeley. If I am going to reconstruct what it was about in the 1980s and the incredible place where we could do a lot of work on the fundamentals of cryptography, it was a combination of wise mentors, one of them my advisor who is sitting right there, and a group of very ambitious peers, one of them sitting right there, and some others, and a receptive young field that was receptive to new ideas and new models and new questions and new answers. Advisors, mentors, peers, receptive young field; you still have to figure out what you should work on. That's a good recipe, but you still have to think about what it is you are going to do your masters or PhD on. If I think about my own graduate students and how they go about this, it's tough. They look at conference proceedings and see what everyone else is doing. These days, there's more and more papers every day. I am happy to have been a grad student many years ago. Looking at proceedings for STOC 79, 80, 81, etc., there are very beautiful interesting results here. Something about negation here, something from logic, something from Yao on distributed computing, etc. Another thing you could do is take a course, and this is the class, and some of you might recognize this page; this was my first page of lecture notes for CS 271 in 1981. I never knew that you had this policy that we could have 5 minutes of your time anytime we needed it. I think we took much more than 5 minutes of your time. This was a course on number theory, and covered great things like GCD, discrete logs, factoring, probabilistic primality testing, primes in NP, diophantine equations, continued fractions, the Legendre symbol, the Jacobi symbol, etc. This is really classical number theory, but the algorithmic component is being stressed all the time. Some went on to do algorithmic number theory at the time, like Adleman and Anders and so forth.

But what I really want to talk about is that at the end of the class he talks about four cryptography papers: Diffie-Hellman, RSA, Rabin, and Shamir's break of the Merkle-Hellman knapsack. I couldn't find those lecture notes. That's all happening around these papers. These were from 1977 and, I think, Merkle-Hellman... and these are the papers we used to cover. It seems like we're finding out about the power of number theory. There's clearly a hint here that there's something great going on, that there's this incredible use of number theory to solve real problems in a way we didn't know how to do before.

There's one more paper, in particular about mental poker over the telephone (SRA). The problem was suggested by Floyd. He was at Stanford at the time. I went to the original paper, which I had never read until this weekend, to tell you the truth. Floyd is asking: once there were two mental chess experts who grew tired of their pastime, so they decided to do mental poker. If you are playing chess, everyone sees the board. If you are playing poker, you don't have the cards in front of you, so you have to do it over the phone. They proposed a protocol. What was the protocol? It showed how to encode cards in such a way that you could deal them. The encoding was you take the card name (the truth is that I had never played poker; now I know). You take the 52 cards. Everybody knows them. A player takes a card A and raises it to the power e mod n, and they scramble the deck. Any binding commutative secret encryption function E would do; you could do some other encoding of A as long as it was commutative and binding. They first proved that it was impossible, and then showed how to do it. Why is it possible to prove that it's impossible, and yet do it?
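To make the mechanics concrete, here is a minimal sketch of the commutative locking idea (exponentiation modulo a shared prime, as in SRA); the prime, exponents and card value below are purely illustrative, not taken from the talk.

```python
# Minimal sketch of SRA-style commutative "locking" of cards: each player's lock
# is exponentiation mod a shared prime p, so locks can be applied and removed in
# any order. Parameters here are illustrative only.
p = 2**127 - 1  # shared prime (a Mersenne prime, chosen just for the demo)

def keygen(e):
    # e must be invertible mod p-1 so the unlocking exponent d exists
    d = pow(e, -1, p - 1)
    return e, d

def lock(card, e):
    return pow(card, e, p)

def unlock(card, d):
    return pow(card, d, p)

eA, dA = keygen(65537)   # Alice's secret exponent pair
eB, dB = keygen(257)     # Bob's secret exponent pair
card = 11                # some card encoded as a number

# Commutativity: locking with Alice's key then Bob's equals the reverse order,
# and either lock can be removed independently.
assert lock(lock(card, eA), eB) == lock(lock(card, eB), eA)
assert unlock(lock(lock(card, eA), eB), dB) == lock(card, eA)
```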

The thing is, this was beautiful and clever, but Lipton showed that there's a problem with it. He observed that all quadratic residues remain residues. These cards, if they have some property before you take them to the power e mod n, then they will also have that property afterwards, and that property is easy to test since n was a prime. So some property of these cards is preserved even after you encrypt them. High cards had one property, low cards had another. So you could still recognize which card is high and which is low. He says, perhaps this is always true, for any encoding with these properties. So great, we have an open problem.
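Continuing the illustrative setup above, this is the observation in code: an SRA exponent is necessarily odd (it must be coprime to the even number p-1), and raising to an odd power mod a prime preserves quadratic residuosity, which anyone can test with Euler's criterion.

```python
# Lipton's observation, checked on toy values: residuosity survives the "encryption".
p = 2**127 - 1
e = 65537   # any valid SRA exponent is odd, since it is coprime to p-1 (which is even)

def is_qr(a):
    # Euler's criterion: a is a quadratic residue mod the prime p iff a^((p-1)/2) == 1
    return pow(a, (p - 1) // 2, p) == 1

for card in range(2, 60):
    assert is_qr(pow(card, e, p)) == is_qr(card)
```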

This is the open problem. Can you play mental poker provably hiding all partial information? How do you define partial information? How do you define hiding? What do you mean by provably? The lucky thing is that we don't just charge on, but we try to think what it means to actually solve it. Is this all sufficient for mental poker? Even if you defined partial information and figured out what it means to prove something even when it was shown to be impossible; would this satisfy conditions for mental poker? This is too many questions. These questions came up one at a time. I am going to talk about one feature for starters. What does it mean to hide partial information?

Computationally indistinguishable encryption is where the adversary can't find any two cards for which he can tell apart Enc(m0) and Enc(m1) better than (1/2) + negl(k). So if we look at this picture, there's an adversary that looks like Count Dracula, and they are trying to tell which card is which even after encryption. There are two probability distributions, you're getting a sample, and you have to tell which is which. The encoding function cannot be deterministic, otherwise each card's encryption would be unique. Why is this the right definition? I want to say that all partial information about a card, whether it was red or black or high or low, is hidden. You can prove that if your encryption scheme satisfies this, where you can't find a pair that are distinguishable, then all partial information is hidden. You would prove it by contrapositive: if you could figure out some partial information from the encryptions, then you could convert that into two messages whose encryptions you can tell apart... I want a way to encode cards so that I can define hiding partial information. You don't want to be able to tell two cards apart. Your encoding has to satisfy this. A deterministic method can't do this. I won't spell it out. Figure it out, hah. The definition suits the problem.
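Written out, the condition being described looks roughly like this (my paraphrase, with k the security parameter and b a uniformly random bit):

```latex
% For every probabilistic polynomial-time adversary A and every pair of
% cards/messages m_0, m_1 that A can come up with:
\Pr_{b \leftarrow \{0,1\}}\bigl[\, A(\mathrm{Enc}(m_b)) = b \,\bigr] \;\le\; \tfrac{1}{2} + \mathrm{negl}(k)
```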

Essentially, if you think about it, do we have an encryption scheme at this time? No, we have RSA, Merkle-Hellman, Diffie-Hellman. What we need now is, where's my water? What we need is to encrypt a single bit. What does that mean? Rather than having RSA that takes information to ciphertext, we need a decision problem where you can't tell random yes instances from random no instances better than (1/2) + negl(k). We should be able to generate random yes and no instances which are hard to distinguish. If we have that, we have it all. We will let random yes instances be encryptions of 0, and random no instances be encryptions of 1. We thought this was a big discovery: how to encrypt a single bit [GM]. If we have such a decision problem, we can encode a 0 or a 1. But do we have a decision problem? This is the extent of what we know about number theory. There was a eureka moment, for me this was a eureka moment, that the problem we should use is the quadratic residuosity problem.
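Here is a toy sketch of that single-bit scheme in the Goldwasser-Micali style: an encryption of 0 is a random square mod n, an encryption of 1 is a random pseudo-square (a non-square with Jacobi symbol +1). The tiny primes and the trial search for y are for illustration only.

```python
# Toy sketch of single-bit encryption from quadratic residuosity:
# Enc(0) = random square mod n, Enc(1) = y * (random square) mod n,
# where y is a fixed non-square with Jacobi symbol +1.
import math, random

p, q = 10007, 10009       # illustrative primes; real keys are enormous
n = p * q                 # public key (together with y below)

def is_qr_mod_prime(a, prime):
    # Euler's criterion modulo a prime
    return pow(a, (prime - 1) // 2, prime) == 1

# y: a non-residue mod both p and q, found by trial for this toy key
y = next(t for t in range(2, 1000)
         if not is_qr_mod_prime(t, p) and not is_qr_mod_prime(t, q))

def enc(bit):
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(r, 2, n) * pow(y, bit, n)) % n

def dec(c):
    # With the secret factorization, residuosity mod p reveals the bit
    return 0 if is_qr_mod_prime(c % p, p) else 1

assert all(dec(enc(b)) == b for b in (0, 1, 1, 0))
```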

What do we know about quadratic residues? Take n and y, and the equation y = x^2 mod n. When n is prime, it's easy to tell when there's a solution to this equation; when n is composite we don't know how, so there's a gap here between n prime and n composite. There's a decision problem, right? The decision problem that is hard is whether y is a quadratic residue or not. Why is it easy to generate instances? You can take x and square it, and get residues. At the time we were looking at n which is special, it has some special properties; if you take x^2 it's easy to generate residues, and there is no other way he knows, he doesn't know how to tell them apart. So we went to talk with someone in the math department and we asked Lehmer what he would do: "I am not a gambling man, I wouldn't guess unless y is small (perfect squares)." He invented some primality test versions. How many are there? There's a calculation. We want something negligible in log n, and that is negligible in log n. Since this is a historical talk, I must say that I remember when I was... anyone can give a historical talk, I used to tune out, but now I find it fascinating. In any case, let me tell you what I read about in a little bit.

Lehmer was married to this lady and they wrote lots of papers in number theory together. She wasn't a faculty member, she was an independent researcher. During WW2 they let her be faculty for a few years, making an exception to the anti-nepotism rule that two family members cannot be in the same department, which is weird because I think his father was head of the department at the time anyway. He was blacklisted by McCarthy.

So the question remains: if you give him a large y, he won't guess. So I need to know, is it going to be hard for a random y? The technical contribution in realizing the one-bit encryption is this: random self-reducibility of quadratic residues. We can transform an algorithm that works with very high probability for a random y into one that works for every y. This is a random self-reduction. Suppose you want to solve a problem instance y, worst case, and you can break this question into random instances r1, r2 and r3; suppose you could solve these random instances, then you could reconstruct and recombine to find the solution to the original problem. If such a reduction exists, and you can solve random instances, then you can solve any instance. If it's hard to go from F(x) to x, then it's... so we get a worst-case to average-case reduction: if you fix the composite number, quadratic residuosity is as hard on average as in the worst case; all of this is for a particular composite number. That's all I am going to tell you about this paper.
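A minimal sketch of that re-randomization step, assuming a hypothetical `average_case_qr_solver` that is correct on most random instances for this n:

```python
# Random self-reduction for quadratic residuosity mod a fixed n: to decide a
# worst-case y, multiply it by a random square, which yields a uniformly random
# element with the SAME residuosity as y, then ask the average-case solver and
# take a majority vote over many independent re-randomizations.
import math, random
from collections import Counter

def decide_qr(y, n, average_case_qr_solver, trials=101):
    votes = Counter()
    for _ in range(trials):
        r = random.randrange(2, n)
        if math.gcd(r, n) != 1:
            continue
        y_rand = (y * pow(r, 2, n)) % n    # random instance, same residuosity as y
        votes[average_case_qr_solver(y_rand, n)] += 1
    return votes.most_common(1)[0][0]
```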

What is the impact of these ideas? STOC 1982. This is how to play mental poker hiding all partial information. There is a way to encrypt bits where you hide partial information. More important than that is the concept of computational indistinguishability. This is very relevant. It gives you a new way to define a lot of questions you might be interested in that are related to encryption but are not encryption. We have an adversary, a poly-time guy; he is faced with the fact that he will get a sample from one probability distribution or another, and he has to tell which is which. Computational indistinguishability says that if you can't tell better than 50/50 then they are effectively the same, effective similarity. This turns out to be quite useful, because you have two probability distributions being essentially the same. It has been useful not just for encryption but also for how to define pseudorandomness; you can talk about random strings and pseudorandom strings, simultaneity, etc. Pseudorandom generation is possible [BM82, Y82, GGM84], etc. In any application, like choosing a subset versus placebo, because again you can't tell the difference, you might as well use a pseudorandom function.

Computationally indistinguishable program obfuscation. We are still using this definition. We are using it for program obfuscation. The definition for program obfuscation is that you take two programs and you scramble them so that you can't recognize the original program; that is, an adversary who got an obfuscation of the first or the second program cannot tell them apart. Strange definition, but extremely strong. It was shown achievable by a construction. Many uses of this have been applied to programs.

What's another impact besides the paradigm of computational indistinguishability? Worst case to average case via random self-reducibility. We had quadratic residue versus non-residue. As soon as you think about it, you can show that RSA, discrete log, and elliptic curve discrete log are random self-reducible. BLR actually showed lots of problems that have this random self-reducibility property, for program checking and self-correcting. The idea here is that you want to check that a program is correct, and you can break the question into sub-questions, random questions of the same type, and ask the program on those; as long as the program is correct for some reasonable fraction of inputs, you can self-correct and get the correct answer. It's a useful property to have (a sketch follows below). Then there was a famous result from Lipton about the permanent being random self-reducible. This was a stepping stone to characterizing the power of interactive proofs. Coming back to crypto, he showed that for lattice problems you can have a worst-case to average-case reduction (Ajtai 96). A worst-case lattice problem, with no parameters, no n, you can map it to an average-case lattice problem.
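As an illustration of the checking/self-correcting idea (in the BLR spirit, not a construction from the talk), here is a toy sketch with hypothetical names `buggy_f` and `self_correct`, for a linear function mod a prime:

```python
# Self-correction sketch: a program for f(x) = a*x mod p that errs on a small
# fraction of inputs can still be used to compute f correctly everywhere, by
# querying it only at random points and recombining (f(x) = f(x+r) - f(r)).
import random
from collections import Counter

p = 10007
a = 4242

def buggy_f(x):
    # Stand-in for a program that is correct on most, but not all, inputs
    if x % 97 == 0:
        return 0                      # wrong on a sparse set of inputs
    return (a * x) % p

def self_correct(program, x, trials=31):
    votes = Counter()
    for _ in range(trials):
        r = random.randrange(p)
        # both queries land on uniformly random points, where the program is usually right
        votes[(program((x + r) % p) - program(r)) % p] += 1
    return votes.most_common(1)[0][0]

assert self_correct(buggy_f, 97 * 5) == (a * 97 * 5) % p   # right even where buggy_f errs
```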

This quadratic residuosity problem turned out to be really versatile. For encryption you get semantic security. There's IBE [Cocks], which came up around the same time as the bilinear-map constructions. You can show circular security (BR) and leakage resilience (BR). It's been useful for protocols like PIR [KO]: the homomorphic property of this squaring function can be used for private information retrieval. It's the running example of interactive and non-interactive zero knowledge, by GMR and BFM. It's a really good problem. It's still a fascinating question whether you can do fully homomorphic encryption based on the quadratic residuosity problem. Maybe you can; I won't say it's an impossibility.

This is not the end of my talk. For every section I have open problems. Open problems 1: Show that IO obfuscation implies hiding partial information for executions and inputs you can find. Change the IO obfuscation definition to hold only for pairs of programs you can find?

If you have a scheme where you can't distinguish between encryptions of 0 and 1, then you have a scheme that hides all partial information. Can you show that IO obfuscation implies something more, that you can hide partial information for executions and inputs you can find? I am asking for a stronger study in this direction. There might be other implications for the privacy of the underlying program. The second question is, what if you change the IO obfuscation definition to hold only for pairs of programs you can find? The idea, as with encryption, is that you can't find pairs of messages you can distinguish between. I don't know if this changes anything.

How about showing worst-case vs average-case for quadratic residuosity (QR), averaging over the composite numbers n? Or any number theory problem, for that matter. Can you reduce QR mod n to QR mod n' for any n'? We could just assume there's a non-negligible fraction of composite numbers for which it is hard, pick enough of them so that surely one of the hard ones falls in, and do some XOR. The XOR theorem was not very efficient. Can you get universally hard quadratic residues without knowing the fraction, so that the average case is as hard as the worst case?

I was advertised as giving a talk about zero knowledge. So what about zero knowledge proofs? Where is zero knowledge?

In this mental poker protocol that we had, the players, at the end, after they play the protocol which I didn't describe, want to prove to each other that they played it correctly. They release all of the composite numbers they used to encode the cards. You couldn't tell which card was which because you didn't have the factorization. Can you prove that you followed the protocol without releasing the factorization? Can you prove that a quadratic residue is a quadratic residue without releasing a square root of it? One way to prove it is to give the square root, but can you do it without releasing it? Can you prove that a number is a quadratic residue while revealing nothing except that?

Instead, it turns out that it's not an easy question from many perspectives. We need some way to measure how much knowledge is released during interaction. If we want to talk about hiding of knowledge, whatever that means, we need to define it. We also need a new notion of a proof. What is a proof? You need some new notion of a proof. I went to MIT at this point in the 80s. This was a crypto mecca. So we went to the east coast. And that's where this work on zero knowledge proofs was done.

There was the pi theorem, the Pythagoras theorem, etc. We forget that someone had to actually read the proofs and verify them. We want to make the verifier an explicit presence in this proof system. There is going to be a prover and a verifier. We are interested in the kinds of proofs that are easy to check. It might take a long time and a lot of resources to find the proof, but the verifier has only polynomial time to verify that the proof is correct. An example: n is a product of 2 primes. That's the claim. How do you show it? You can give the factorization. Just give a p and a q such that n = pq. I can check, I can multiply, and that takes polynomial time. After we do this, I know that n is a product of two primes, but I also know the prime factorization. And remember, we wanted to be able to do that without releasing the prime factorization.

There's another way to do this. The idea is to prove that a proof exists. She won't give it, but she proves that it exists, that n is a product of two primes. We need two new ingredients: randomness and non-trivial interaction. All the interactive protocols use randomness and interaction: they are tossing coins and talking back and forth. Mathematicians get upset about using the word proof here. Well, this is an interactive proof. The verifier can ask questions back and forth to the prover. This is the only protocol that I am going to remind people about, because I am going to use it for properties of zero knowledge in general.

Suppose all that the prover wants to do is claim that y = x^2 mod N is solvable, without revealing the value of x. The prover picks a random r; she wants to prove that y is equal to x^2 for some value x. She considers z = r^2, and z * y, which is (r * x)^2. If I gave you solutions to both, that is r and r * x, you would be convinced that the claim is true, but you would also learn x. But I want to prove to you that I could give it to you if I wanted to. Instead what I will do is give a solution to either z = r^2, a random equation unrelated to y, or to z * y; you choose which equation you want a solution to. The verifier knows that the prover might be trying to cheat. What should he ask for? The best he can do is flip a coin; the prover can't predict which one will be asked. So the verifier flips a coin and picks a question to ask. If the claim is false, the prover can solve at most one of the two equations. If the claim is true, then the prover can always convince the verifier. You can prove correct statements; you can be caught trying to prove incorrect statements with probability half. That's probably not good enough, so we repeat this 100 times, and the probability of catching a cheater can be made very very high. You can make this be 100 tries or 1000 tries or whatever would satisfy you. The prover knows exactly the square root of whichever equation is asked.
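A minimal sketch of an honest run of this protocol, with toy parameters of my own choosing (the modulus and x below are illustrative):

```python
# One round of the quadratic-residuosity protocol: the prover, who knows x with
# y = x^2 mod n, commits to z = r^2 and answers the verifier's coin with a root
# of either z (b=0) or z*y (b=1); the verifier squares the answer to check.
import math, random

n = 10007 * 10009        # illustrative composite; its factorization stays with the prover
x = 123456 % n           # prover's secret square root
y = pow(x, 2, n)         # public claim: "y is a square mod n"

def one_round():
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    z = pow(r, 2, n)                       # prover's commitment
    b = random.randrange(2)                # verifier's coin
    answer = (r * pow(x, b, n)) % n        # r if b=0, r*x if b=1
    return pow(answer, 2, n) == (z * pow(y, b, n)) % n   # verifier's check

# Each round has soundness error 1/2, so the protocol repeats many rounds.
assert all(one_round() for _ in range(100))
```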

So what makes this possible? The statement to be proven, in each interaction, has exactly two parts; either part on its own gives you nothing; seeing both implies 100% correctness; and you're allowing the verifier to choose which part he wants to see. In this protocol... what would be another ingredient? Proving that a number is a non-residue. How would I show that there is no square root? How do you show that an object doesn't exist? How do you show that there are no solutions? It's very simple. The verifier flips a coin, he starts: he chooses a random number r, and he sends over either r squared times the original number, or r squared itself, and he says, hey, you the prover are so powerful, can you tell me which coin I flipped? If the claim is a lie and y was a quadratic residue, the prover couldn't distinguish one residue from another. If it's a non-residue, the prover can detect which it was. You can prove correct claims, and you can't prove incorrect claims with good probability. The prover needs to know a bit more here: the prover needs to distinguish quadratic residues from non-residues. The prover needs some extra power. It's an interesting open question: maybe there's another protocol where you don't need more than that? Is it always the case that you need to have more? Secondly, the verifier must hide his coin tosses.
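Here is the same idea as a toy sketch (illustrative primes; the prover is handed the factorization simply to stand in for its extra power):

```python
# One round of the non-residuosity protocol: the verifier hides a coin b inside
# t = r^2 * y^b mod n and challenges the prover to recover b. If y really is a
# non-residue, the residuosity of t reveals b; if y were a square, t would look
# the same either way and the prover could only guess.
import math, random

p, q = 10007, 10009           # known only to the prover in this sketch
n = p * q

def prover_is_square(t):
    # Euler's criterion modulo each factor
    return pow(t, (p - 1) // 2, p) == 1 and pow(t, (q - 1) // 2, q) == 1

# y: a genuine non-residue (non-square mod both p and q), found by trial
y = next(t for t in range(2, 1000)
         if pow(t, (p - 1) // 2, p) != 1 and pow(t, (q - 1) // 2, q) != 1)

def one_round():
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    b = random.randrange(2)                        # verifier's hidden coin
    t = (pow(r, 2, n) * pow(y, b, n)) % n          # r^2 or r^2 * y
    prover_guess = 0 if prover_is_square(t) else 1
    return prover_guess == b                       # verifier accepts if the coin was recovered

assert all(one_round() for _ in range(50))
```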

[GoldwasserMicaliRackoff85] Zero knowledge interactive proofs for L

You have a prover and a verifier: the verifier asks questions, the prover provides answers. It's a decision problem: the equation is solvable or not solvable. If x is in the language L (if the number is a quadratic residue), the verifier should accept; and if not, then the verifier should reject with good probability. If the prover is lying, no matter how sophisticated the prover is, it cannot convince the verifier. Some people had objections about this, because proofs are supposed to be written down in a book. What about zero knowledge? What about knowledge?

It's obvious that in that protocol you don't find out what the square root is. We're claiming that you find out nothing. We would have to define what it is that you find out, and that's too hard, defining knowledge. Instead we talk about what zero knowledge is. How are we going to define that? What is it that we want? For any true statement, when the prover is not trying to cheat, the prover is protected. If the verifier were unbounded, he could find a proof for himself; the only interesting case is when the verifier is bounded. For every adversary, whatever he can compute after the interaction he could have computed before the interaction. The interaction gives you nothing new computationally. It's hard to work with this definition; we need a more formal one. The definition is captured in the title, which is that if you can simulate it then you might as well stay home. If you go to dinner with someone and everything they say is boring, you might as well have stayed home. We want to say that a protocol is zero knowledge if, for every true statement, it makes no difference whether you sit here talking to the prover, with the prover claiming the theorem is true, going back and forth, and the adversary recording all of this cross-talk, or the adversary just sits at home and watches Netflix. All the adversary needs to know is whether the claim is true; with just that, he could come up with a fake, simulated conversation as if he had been talking to the prover. The distributions of the real view and the simulated view should be such that any PPT algorithm can't tell them apart. The adversary itself is part of the interaction; the adversary may be asking questions in really tricky ways. This has to be the case for whatever strategy the adversary employs, no matter what questions are asked.
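For the quadratic-residuosity protocol sketched earlier, the simulation idea can be shown very concretely, at least in the simple honest-verifier case (against arbitrary verifier strategies the simulator also has to guess challenges and rewind). This is my own sketch, not the construction from the paper:

```python
# Simulator for one round of the QR protocol: sample the challenge b and the
# answer a FIRST, then solve backwards for the commitment z = a^2 * y^(-b) mod n.
# The resulting (z, b, a) transcripts are distributed exactly like real ones,
# yet are produced without ever knowing a square root of y.
import math, random

def simulate_round(y, n):
    # assumes gcd(y, n) = 1, as in the protocol
    b = random.randrange(2)
    while True:
        a = random.randrange(2, n)
        if math.gcd(a, n) == 1:
            break
    z = (pow(a, 2, n) * pow(pow(y, -1, n), b, n)) % n
    return (z, b, a)    # the verifier's check a^2 == z * y^b (mod n) holds by construction
```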

Now someone can give you a protocol and show you a simulation of what went on in the protocol, and then you know the protocol is zero knowledge, which means that there's no way that participating in the protocol could be useful to you beyond the information that the claim is true, nothing but validity. I hid a lot of details here. I won't talk about them.

There are flavors of zero knowledge. Perfectly identical distributions give PZK. Statistically close distributions give SZK. Computationally indistinguishable distributions give CZK. We were convinced that zero knowledge had some interest. We decided that even though for quadratic residuosity and non-residuosity the distributions were identical, maybe there would be other protocols where you can't show identical distributions, but you could show distributions that look identical because you're using encryption: in polynomial time you can't tell them apart. The first protocol we came up with required some of these assumptions.

We thought this had potential. Not everyone else agreed. This was 1983-1985 (The Resistance). We were continuously submitting this paper. Not the same paper, always a different title. There was "proofs with untrusted oracles".

"Proofs with untrusted oracles" abstract: In this paper we show how to obtain proofs from an untrusted oracle. In some cases, it appears that these proofs cannot be obtained by NP-Turing machines. We apply computing with untrusted oracles to proof separations: a new way to distingusih the relative computational complexity of functions. Under the intractability assumptions of factoring, we prove a strict hierarchy among classic number theoretic functions. Our techniques allow us to handle (in an elegant model) the communication between the distrustful users A and B with uneven computing power (either way): A wants to release some piece of information to B, but nothing more than strictly necessary; B wants a proof tha tthe information is correct.

There was a notion of proof separation, for separating problems based on one being an oracle for the other or not. This paper was not accepted. The next time we called it "Interactive and minimal computations", with Silvio Micali.

"Interactive and minimal computations" abstract: We introduce the class INP (Interactive Non-deterministic Polynomial Time) to be applied to decision problems. This will allow us to present INP as a natural extension of NP. We will also study INP computations in a more general context, as needed for several applications. Pi will denote a decision problem.

That didn't convince them either. There was a third paper, "The information content of proof systems". The definitions are getting better. The protocol is changing a little bit. The next submission was with Charles Rackoff, "The knowledge complexity of interactive proofs". That got rejected too. The fifth time it got accepted, as "The knowledge complexity of interactive proof systems". Why am I telling you this? It's a funny story. Today, I would never submit a paper 5 times. I think that's unfortunate, because it worked.

In 1985 there was another paper already, Arthur-Merlin games [BaM85], where they introduce another proof system: there's a prover and a verifier, Merlin and Arthur. Is IP more powerful than AM? Is coin privacy necessary? In Arthur-Merlin games there's a prover, Merlin, and a verifier, Arthur; they are still interacting, and they would still like to know if a statement is true. But all that Arthur can do is send coins; he can't hide his coins. In one of our examples it was important to hide the coin tosses; this guy just tells his coins. Is IP more powerful than AM? Is coin privacy necessary? There was a theorem that showed that public coins and private coins are equivalent in power. The main observation is that what Merlin needs to do is prove that if you look at the set of non-squares, and the set of r squared times y, where y is what you are claiming is a non-square, that set contains only the non-quadratic residues; if y is a square, then it contains both residues and non-residues. That is the crux of the idea: you show that there are a lot of coins in the private-coin case that would have led to acceptance. It's all about the size of sets. Public coins and private coins are just as powerful. This protocol will leak the size of the set and other things; that's something that will come up later when we talk about open problems.

What is the impact of this stuff on cryptography and complexity theory? In terms of crypto, there's lots of use of zero knowledge. This is the ultimate protection from identity theft [FS86]. There's a password scheme where your password is a proof, and the way that you convince the computer that you know the proof is by a zero knowledge proof. The computer doesn't learn how to convince someone else. It's sort of deniable; it's a good idea. The major result right after was from [GMW86, BGGHKMR86]: if there's a one-way function, then every language in NP can be proved in zero knowledge. Since every NP statement can be proved in zero knowledge, we are talking a different ball game. You can get an automatic transformation of protocols secure against benign adversaries into protocols secure against malicious adversaries: the proofs don't compromise anything beyond convincing you that the statements are true. Zero knowledge becomes a benchmark for the study of cryptographic protocols, for example for checking whether protocols are safe to compose (is it still zero knowledge or does the composition break it?), whether you can use the adversary's program within a proof of security, secure composition.

There have been lots of extensions of zero knowledge proofs in the 90s. Zero knowledge proofs of knowledge [GMR, BG92]. Zero knowledge for promise problems [GG, SahaiV]. Non-interactive, non-malleable, concurrent, resettable, exact, non-black-box simulation [many]. Quantum zero knowledge proofs [Watrous]. I'll talk about promise problems in a little bit. There's non-interactive, as I mentioned already. There's non-malleable, with Sahai again. Concurrent is Sahai. There are too many to mention here. There's also an extension of this to quantum zero knowledge proofs, which poses an interesting problem. This is all in the 90s.

There are some recent uses of zero knowledge, very different from what we anticipated. There was a paper by Barak and some physicists, whose names I won't mention because I forget them, where they take a zero knowledge protocol similar to the number-theoretic ones and show how you can apply these principles to nuclear disarmament. Then there's a paper by Naor that shows how to apply zero knowledge to forensics: someone comes to claim that he didn't commit a crime, so what about submitting the evidence under zero knowledge? There's a lot of work with Zerocoin and Zerocash, which is bitcoin with privacy: you can take the bitcoin protocol and add the property that you don't have to release everything. There are many works that have to do with computation delegation, like Kalai, Rothblum, Tromer, etc. Why computation delegation? We'll talk about it.

I am not going to discuss all of this. I will discuss three things about this development, which are probably less well known. How hard are lattice problems? Interactive proofs had an impact on this. A lot of you know that these days the rage is not number theory but these post-quantum hard problems, where you can get fully homomorphic encryption and possibly obfuscation, based on hard problems on lattices. The best known algorithms run in time 2^n, like [AKS01]. You can approximate: with algorithms like [LLL82, Schnorr85] you can, in polynomial time, get an approximation of a short vector in a lattice. We know how to prove, and the best result right now I think is [Khot04], that this problem is NP-hard to approximate for sub-polynomial approximation factors. Why am I talking about this? Because it turns out that when we use a lattice problem for fully homomorphic encryption or obfuscation, we make some assumptions about how hard it is. We assume it's hard to solve in polynomial time, sometimes with subpolynomial factors of approximation. So how reasonable is that? Is it true? Is it false? Can we say anything about it? In 96, when people were starting to do this, there was a paper by Ajtai... well, just a little dyslexia. The first question that comes to mind: does this mean that we can base cryptography on NP-hardness? For certain approximation factors it's NP-hard to solve, and we have a cryptosystem that can be based on this problem with a different approximation factor. Can we base cryptography on NP-hardness? It seems unlikely, okay, but what's the evidence? One piece of evidence comes from interactive proofs: we can show evidence by constructing an interactive proof, or an Arthur-Merlin proof, that the problem at the cryptographic approximation factor is not NP-complete. What kind of evidence? We can show that if such a problem is in NP intersect co-AM (it has short proofs, and its complement has short Arthur-Merlin proofs), then it's unlikely to be NP-complete. It would be great if it were co-NP; the next best is co-AM: the complement of the problem has a short Arthur-Merlin interaction. We define a problem called GapSVP: you are given a lattice specified by a basis, and the promise is that either there exists some vector which is short, shorter than d, or all vectors in this lattice are big, bigger than g times d. Here's a problem we don't know how to solve, and we show that this problem, the "all vectors in this lattice are large" side, is in this class. It was easier to show that something is small, if you don't care about zero knowledge; but how do you show that there's no short vector in this lattice? You have to do something different; it's a bit different from all the protocols I've shown you. What it tells us is this: if we look at this line of approximation factors for lattice problems, it's hard to solve exactly, it's easy to solve with a large approximation factor, and the crypto world uses it somewhere in the middle. For some approximation factor there is evidence that it's not NP-complete, and the crypto world uses larger factors than this. We proved it in some sense; the factor has since moved, and there is also a co-NP result. All of this is so as to understand the open problems later.
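For reference, the promise problem being described can be written out as follows (a standard formulation, not a quote from the slides; g is the approximation factor, d the length threshold, and lambda_1 the length of the shortest nonzero vector):

```latex
\mathrm{GapSVP}_g:\ \text{given a basis } B \text{ and } d > 0,\quad
\text{YES if } \lambda_1(\mathcal{L}(B)) \le d,\qquad
\text{NO if } \lambda_1(\mathcal{L}(B)) > g \cdot d.
```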

What's another result with an open problem later? There's a beautiful development by [SV03]. They worked on this question of statistical zero knowledge: just look at the flavor where you can get the simulated view really close to the real view. There's a complete problem for this class, and its statement makes no mention of zero knowledge or interaction. It's quite interesting. It means that if you want to show properties of this class, you just have to show properties of this one problem. And finally, before we get to open problems, there's some work on quantum zero knowledge proofs of knowledge. What's the problem? One of the nice things about zero knowledge is that you can talk about what it means for a machine to know something. It's an interesting question.

Prover knows witness s: if the prover successfully proves, then there is an extractor that extracts s by running the prover repeatedly from the same initial state. Quantum case: rewinding = state copying, which is impossible. [Unruh12] Strict quantum commitments => quantum proofs of knowledge for all of NP.

The definition says that if a prover knows something, you can extract that something from them by repeatedly running the same protocol over and over again, each time asking the prover a different question. What makes the whole thing work is that you can restart exactly from the state you want. Everyone is quantum here: the prover, the extractor, and so forth. We don't know how to go back to the state you were at, because once you copy the state it changes and so on. It's much harder to do zero knowledge in the quantum world. There's some work on this. I encourage anyone who wants to learn about quantum computation to read this paper, because there's something there, but the required commitments, maybe they exist, maybe they don't, in terms of using one of the crypto systems people are working on.

One more impact: this simulation definition becomes, in a way similar to computational indistinguishability, useful in other contexts, essentially whenever you have a situation where you are trying to define security. It's secure if the guy couldn't do any better than in a benign, ideal situation. Here you could say there's a protocol, there are some adversaries that are getting a real view of messages; how secure is this, with all the messages and coins and randomness? I don't know, but I can show that without even seeing the messages, even in an ideal world where I had access to a trusted third party, I could simulate the view. I talked about mental poker, and I asked how we would define that a solution for mental poker is right. Here's how: you would compare what happens in the protocol to what happens in a real game with a deck of cards and a strategy. You can't win the game any better in the crypto world than in the real world. This definition would allow you to prove that, by simulating what happens in the real world, even with a deck of cards and so forth.

There's this paradigm and it's very useful; it has been used for obfuscation. You can define obfuscation using this "ideal/real paradigm". The best obfuscation that I could hope for is that even though I saw the code and could analyze it, it's no different than just having black-box access to a program that computes the same function, in the following formal sense: I could simulate whatever I could gain from looking at the code by this input/output access. [BGIKRSV] has a theorem that says this is impossible to achieve in general. It's a requirement to know your coauthors' names. They show that this is actually impossible. A few years later, something else was done with IO obfuscation.

Can you come up with a private-to-public coin transformation that preserves zero knowledge? [GS85]. Vadhan05 shows another transformation that does preserve zero knowledge, but the ZK flavor changes; can you preserve it? Prover power: can you preserve it? There is a beautiful paper in 2005 where he shows another transformation that preserves zero knowledge but takes statistical to computational; it would be nice to preserve the flavor, statistical before, statistical now.

How powerful does this prover have to be? How powerful do you have to be to prove something to someone else? Do you need more power than it takes to decide? It's an analog of search versus decision: to prove membership, how much harder is it than to decide the language itself? For NP-hard problems these are equivalent. But this is an interesting question in general. This is 89, I am sure this time... I don't remember, okay. Under some sort of complexity assumption there is some language for which search does not reduce to decision. I don't know if that holds in the proving sense. But it is a good question to look at.

How about finding complete problems for perfect zero knowledge or computational zero knowledge? Next, can we find some natural complete problems for statistical ZK, like combinatorial or number-theoretic problems? We have ones that talk about distributions. Can you prove that statistical and perfect zero knowledge are the same? There are some partial results, like Sahai-Vadhan03.

Another question, and this is not a novel question: where can VBB obfuscation be achieved? Is there an unconditional transformation? Can VBB obfuscation serve as a benchmark in the same way that zero knowledge proofs can be used? If IO obfuscation exists then Nash equilibrium is PPAD-hard [BPR15]. Then there's the quantum question. Can you solve the problem? Find appropriate quantum-resistant commitments, new rewinding techniques. Improve the approximation factor for which GapSVP is in NP intersect co-AM.

Right, PPAD-hard.

There has been impact on complexity theory as well. I will do this as a marathon. What about the impact on complexity theory? I think that the point is that zero knowledge showed that the correctness of a statement can be completely decoupled from the knowledge of the proof. The fact that the statement is correct, believing it, and knowing it, are not the same. If a proof of a theorem doesn't require knowing the proof, then what does that mean? You can ask a lot of questions about this. This is fascinating.

Classically, an NP proof is where you send a proof and someone can check it; we can do that for problems in NP. If an NP question is an equation, you would like to show that there is a solution: classically, you show the solution and I verify it. But what if we want to show there are no solutions to the equations? Or propositional tautologies, or something like PSPACE and co-NP. We don't know how to do this with traditional proofs, but what about with interactive proofs? The answer is yes, see [FortnowKarloffLundNisan89, Shamir89]. You can do co-NP, you can do #P, and all problems in PSPACE.

Are there other ways to define probabilistic proof systems? What about something without interaction, or without coins? One direction is the work of [BenorGoldwasserKilianWigderson88]: one prover was great, let's add another prover. This should bother you: why would you need more than one prover? In any case, why is two better than one? Because you can separate them. The verifier can interrogate one in one session, and interrogate the other one in another session, and they can't see each other while being interrogated, yet they have to stay consistent with each other. This is an incredibly powerful tool because it keeps the provers in check: a prover cannot really lie too much, because he will be caught. This mechanism of proof, where there are two suspects of a crime and you interrogate them separately, was introduced for the purpose of zero knowledge, because one can show that you can get perfect zero knowledge in this case. In other words, if you look at these conversations, you can simulate them identically. More than that, they have more power: you can do these classes, but you can also prove membership for everything from NP all the way up to NEXPTIME [BabaiFortnowLund90]. And they need no cryptographic assumptions.

The third question you can ask is: beyond proving more, can you change the type of proof so that maybe these proofs let you prove a statement you could have proved with classical proofs, but better? Not just in zero knowledge, but faster. That is the case. That's the beautiful work of [BFLS, FGLSS, AS, ALMSS]. It's equivalent to MIP, two provers. If you look at how much information has to flow from the provers to the verifier, you see that statements whose check would require reading a long proof can be checked with high probability by reading much less. I'll give you an example. V wants to know if fewer than 51% or more than 99% of the equations are satisfiable. There are six equations on this slide. Either all of them can be satisfied, or fewer than 51% can. This is a hard problem. But the prover is all-powerful: he just has to exhibit a solution satisfying 99% of them, and then you know that 99% can be satisfied. Here's another way to convince you of that, where he shows you a lot less; he doesn't have to give all the x values to you. Instead, the prover will do the following. The verifier chooses a random equation, one of the equations at random. He goes to the first prover and says, give me a solution to this equation, from that assignment you claim satisfies 99% of them, and the prover has to give a value for each of its variables. Then he goes to the other prover and asks, hey, what is the value of this one variable? If in fact there is an assignment that satisfies 99% of the equations, the provers might as well give the right answers. If only 51% can be satisfied, then it is possible to prove that you will catch the provers: the second prover's best strategy cannot always agree with a satisfying answer from the first. So how many bits went back and forth? 3 bits: the first prover gave the values of x1, x4, x7, and the second prover gave just one bit. So I guess 4 bits. You can show that there is a constant probability of catching an error.
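A toy sketch of that check, with a hypothetical little system of 3-variable equations mod 2 and honest provers sharing one satisfying assignment (everything here is made up for illustration; real constructions use algebraic encodings on top of this idea):

```python
# Two-prover consistency check: the verifier samples one equation, gets its three
# variable values from prover 1, gets one of those variables from prover 2, and
# accepts only if the equation is satisfied and the two answers agree.
import random

# each equation is ([i, j, k], target), meaning x_i + x_j + x_k = target (mod 2)
equations = [([0, 1, 2], 1), ([1, 3, 4], 0), ([0, 2, 4], 0)]
assignment = {0: 1, 1: 0, 2: 0, 3: 1, 4: 1}      # honest provers share this assignment

def prover1(eq_index):
    vars_, _ = equations[eq_index]
    return {v: assignment[v] for v in vars_}      # three bits

def prover2(var_index):
    return assignment[var_index]                   # one bit

def verifier_round():
    eq_index = random.randrange(len(equations))
    vars_, target = equations[eq_index]
    answer1 = prover1(eq_index)
    probe = random.choice(vars_)                   # prover 2 never sees which equation was picked
    answer2 = prover2(probe)
    satisfied = sum(answer1.values()) % 2 == target
    consistent = answer1[probe] == answer2
    return satisfied and consistent

assert all(verifier_round() for _ in range(100))   # honest provers always pass
```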

Present and future of delegating computation. All of this was the 80s and 90s. What about the present and the future? I think it's clear that even though you can disarm nuclear warheads and do forensics, probably the most likely usage is delegating computation. There's all this migration of data and computation into the cloud. Can we do this without relinquishing control? The cloud now has my data and my computation; can I do this without giving up all control? In medicine it's research progress vs patient privacy; there's surveillance vs the right to be left alone; in finance, risk analysis vs market competition; in economic growth, consumer targeting vs fair pricing. Can mathematics and technology enable us to have the best of both worlds?

The magic of cryptography says that you can go a long way toward reconciling these things. We have shown that verification of a proof does not require seeing the proof, and that computation on data does not require seeing the data.

Q: Would you put money on statistical and perfect zero knowledge being the same?

A: I think so. I'd say $100. Czech is my favorite coin, it has been doing quite well.
