Simplified Verifiable Re-encryption Mix-nets

Shinsuke Tamura, Shuji Taniguchi




Graduate School of Engineering, University of Fukui, Fukui, Japan

Abstract

Under the assumption that the number of data items that are encrypted and decrypted is sufficiently large and that the final decryption results of individual data can be publicly disclosed, a simplified mechanism for implementing re-encryption type verifiable mix-nets is proposed. Unlike already proposed mechanisms, in which mix-servers prove their honest encryptions while concealing their encryption parameters, mix-servers in the proposed scheme simply disclose their aggregate encryption parameter values. As a consequence, anyone can verify encryption results without interacting with the mix-servers. Also, its primary verification procedures examine only the aggregate behavior of the mix-servers; in other words, they do not examine the correct encryption of individual data. Therefore, the computation volume required for mix-servers to prove their correct behavior is reduced substantially. In addition, the proposed scheme can cope with various attacks from malicious entities more effectively than optimistic verifiable mix-nets, which also examine only the aggregate behavior of mix-nets.


Cite this article:

  • Tamura, S., & Taniguchi, S. (2013). Simplified Verifiable Re-encryption Mix-nets. Information Security and Computer Fraud, 1(1), 1-7.


1. Introduction

A verifiable mix-net M consists of multiple mutually independent mix-servers that shuffle, encrypt and decrypt given data so that no one can know the correspondences between the inputs and outputs of M, while ensuring that the data are handled honestly. It plays an important role in e-voting systems: when M shuffles, encrypts and decrypts the votes put into it, voters can conceal their votes from others, so that they preserve their privacy and protect themselves from coercers, and in addition, participants in an election can convince themselves that individual votes are counted correctly [13, 14, 15]. The same kind of situation arises in other important applications, e.g. e-poll systems, and various schemes have been proposed to implement verifiable mix-nets [5-10, 12]. However, to convince participants of the honest behavior of mix-nets while preserving the privacy of data owners, many existing mechanisms rely on zero-knowledge proofs (ZKPs) [2, 3], which require numbers of interactions between mix-nets and verifiers so that mix-servers can conceal their encryption parameters. Also, in cases where participants cannot completely trust verification results produced by other entities, mix-nets must prove their honesty to each participant individually. In addition, in some schemes the honest handling of each data item must be examined individually, so they are not efficient enough to handle large volumes of data.

To make verifiable mix-nets practical even for large-scale applications, this paper proposes a simplified mechanism for ensuring the honest behavior of mix-nets, under the assumption that the number of data items is sufficiently large and the final decryption results of individual data can be disclosed (this assumption is satisfied in e-voting systems, where individual votes are finally disclosed so that participants can know the election winners). In the proposed mechanism, to prove the honest behavior of the mix-net, individual mix-servers simply disclose their encryption parameters in aggregated form. Also, the mechanism examines only the aggregated behavior of the mix-servers; in other words, it does not examine individual data items. Therefore, complicated verification processes can be removed, and more importantly, because mix-servers do not need to participate in the verification processes, anyone can easily verify the behavior of the mix-net by itself without interacting with it. The mechanism therefore works efficiently even in environments where participants cannot trust verification results produced by other entities. In addition, compared with optimistic verifiable mix-nets, which also examine only the aggregated behavior of mix-servers [7], the proposed mechanism handles various attacks from malicious entities more effectively.

2. Re-encryption Mix-nets

A re-encryption type mix-net M consists of multiple mutually independent mix-servers S1, S2, ---, SQ arrayed in a re-encryption stage and a decryption stage as shown in Figure 1, so that the entities P1, P2, ---, PN that put in their data D1, D2, ---, DN can conceal the correspondences between the data and the outputs of M from anyone, including even P1, P2, ---, PN themselves. In the figure, {yq = g^{xq} mod p, xq} is mix-server Sq's encryption and decryption key pair of an ElGamal encryption function, where the integers g and p are publicly known and common to all data; encryption key yq is publicly disclosed, but decryption key xq is the secret of Sq. The product y1·y2 --- yQ = g^{x1+x2+ --- +xQ} mod p = g^{x*} mod p = y* is the common public encryption key.

First, in the re-encryption stage, each Pj encrypts its data Dj with the publicly known common encryption key y*, using its secret integer aj, i.e. Dj is encrypted to {g^{aj} mod p, Dj·y*^{aj} mod p}. Then, the mix-servers S1, S2, ---, SQ repeatedly re-encrypt {g^{aj} mod p, Dj·y*^{aj} mod p} to {g^{aj+k1j+k2j+ --- +kQj} mod p = g^{kj*(Q)} mod p, Dj·y*^{aj+k1j+k2j+ --- +kQj} mod p = Dj·y*^{kj*(Q)} mod p}, each generating its own secret integers k1j, k2j, ---, kQj. Namely, S1 calculates {g^{aj+k1j} mod p = g^{kj*(1)} mod p, Dj·y*^{aj+k1j} mod p = Dj·y*^{kj*(1)} mod p} from the pair {g^{aj} mod p, Dj·y*^{aj} mod p} received from Pj and forwards it to S2 while shuffling it with the other re-encryption results; S2 calculates {g^{aj+k1j+k2j} mod p = g^{kj*(2)} mod p, Dj·y*^{aj+k1j+k2j} mod p = Dj·y*^{kj*(2)} mod p} from {g^{kj*(1)} mod p, Dj·y*^{kj*(1)} mod p} and forwards it to S3, again shuffling it with the other results; and finally SQ calculates {g^{kj*(Q)} mod p, Dj·y*^{kj*(Q)} mod p} and puts the result on the 1st bulletin board (BB), where a BB is a memory whose contents can be read by anyone at any time. In the remainder, the notation (mod p) is omitted.
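As a concrete illustration, the pipeline above can be sketched in a few lines of Python. This is a toy sketch with illustrative parameters (the prime p, the generator g, three servers); it is not a secure implementation.

```python
import random

# Toy public parameters (illustrative only; real deployments use large safe primes).
p = 2**61 - 1   # a Mersenne prime, large enough for the arithmetic to be non-trivial
g = 3

def keygen():
    x = random.randrange(2, p - 1)     # secret decryption key x_q
    return pow(g, x, p), x             # public key y_q = g^{x_q} mod p

# Three mix-servers; the common key y* is the product of the individual y_q.
keys = [keygen() for _ in range(3)]
y_star = 1
for y_q, _ in keys:
    y_star = y_star * y_q % p

def encrypt(D, a):
    """P_j's initial encryption {g^{a_j}, D_j * y*^{a_j}}."""
    return pow(g, a, p), D * pow(y_star, a, p) % p

def reencrypt(ct, k):
    """S_q's re-encryption with its secret k_qj: multiply in {g^k, y*^k}."""
    c1, c2 = ct
    return c1 * pow(g, k, p) % p, c2 * pow(y_star, k, p) % p

def strip_layer(ct, x_q):
    """One decryption step: divide the 2nd component by c1^{x_q} (Fermat inverse)."""
    c1, c2 = ct
    return c1, c2 * pow(pow(c1, x_q, p), p - 2, p) % p

D = random.randrange(2, p)             # stand-in for data D_j
ct = encrypt(D, random.randrange(2, p - 1))
for _ in keys:                         # each server re-encrypts once
    ct = reencrypt(ct, random.randrange(2, p - 1))
for _, x_q in keys:                    # layers can be stripped in any order
    ct = strip_layer(ct, x_q)
print(ct[1] == D)                      # True: the mix-net recovers D_j
```

Because the first component g^{kj*(Q)} is left untouched during decryption, the commutativity noted below (layers stripped in any order) holds directly in this sketch.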

In the decryption stage, the mix-servers SQ, SQ-1, ---, S1 simply decrypt the re-encrypted form {g^{kj*(Q)}, Dj·y*^{kj*(Q)}} to Dj with their secret decryption keys xQ, xQ-1, ---, x1. That is, from {g^{kj*(Q)}, Dj·y*^{kj*(Q)}}, SQ uses its secret decryption key xQ to calculate {g^{kj*(Q)}, Dj·y*^{kj*(Q)}/g^{kj*(Q)·xQ}} = {g^{kj*(Q)}, Dj·g^{(x1+ --- +xQ)·kj*(Q)}/g^{kj*(Q)·xQ}} = {g^{kj*(Q)}, Dj·g^{(x1+ --- +x(Q-1))·kj*(Q)}}; SQ-1, which knows its secret key xQ-1, calculates {g^{kj*(Q)}, Dj·g^{(x1+ --- +x(Q-1))·kj*(Q)}/g^{kj*(Q)·x(Q-1)}} = {g^{kj*(Q)}, Dj·g^{(x1+ --- +x(Q-2))·kj*(Q)}}; and SQ-2, ---, S1 repeat the same operation until S1 calculates Dj·g^{x1·kj*(Q)}/g^{kj*(Q)·x1} = Dj and puts the result on the 2nd BB.

Here, although individual decryption results are assumed to be publicly disclosed on the 2nd BB, no one except Pj can know that Pj owns Dj unless all mix-servers conspire, because no single entity knows all the decryption keys x1, x2, ---, xQ. In addition, no one, including Pj itself, can know the correspondence between Dj owned by Pj and {g^{kj*(Q)}, Dj·y*^{kj*(Q)}} on the 1st BB or the decryption result Dj on the 2nd BB (except in the case where Dj is unique to Pj), because each Sq in the re-encryption stage shuffles {g^{kj*(q)}, Dj·y*^{kj*(q)}} with the other re-encryption results.

It must be noted that the mix-servers in the decryption stage do not shuffle their decryption results; indeed, even if the decryption results were shuffled, anyone could identify the correspondence between the input {g^{kj*(Q)}, Dj·g^{(x1+ --- +xq)·kj*(Q)}} and output {g^{kj*(Q)}, Dj·g^{(x1+ --- +x(q-1))·kj*(Q)}} of Sq by tracing the component g^{kj*(Q)} included in both. Also, although the mix-servers in Figure 1 are arrayed as SQ, SQ-1, ---, S1 in the decryption stage, they can be arrayed in an arbitrary order because ElGamal encryption functions are commutative. Moreover, S1, S2, ---, SQ in the re-encryption stage can be replaced by any entities, because anyone knows the common encryption key y*.

3. An Approach to Verifying Honest Behaviors of Mix-servers

A problem with re-encryption mix-nets is that, because no one knows all the decryption keys and each integer kqj is a secret of mix-server Sq, no single entity can notice when mix-servers carry out their re-encryptions or decryptions dishonestly, even though each Sq in the re-encryption or decryption stage can easily forge consistent-looking re-encryption or decryption forms arbitrarily (the encryption keys y1, y2, ---, yQ and y* are publicly known). Fortunately, this problem can be solved as follows [5-10]. Namely, if Sq has honestly calculated {g^{kj*(q)}, Dj·y*^{kj*(q)}} from {g^{kj*(q-1)}, Dj·y*^{kj*(q-1)}}, then α = g^{kj*(q)}/g^{kj*(q-1)} = g^{kqj} and β = Dj·y*^{kj*(q)}/(Dj·y*^{kj*(q-1)}) = y*^{kqj} must be g and y* raised to the same secret integer kqj of Sq (in the remainder, a pair {α, β} with this property is called a DDH pair). In other words, a verifier V that verifies the honest behavior of mix-servers can confirm that Sq honestly calculated the re-encrypted form {g^{kj*(q)}, Dj·y*^{kj*(q)}} by examining whether Sq knows a kqj that satisfies the relations α = g^{kqj} and β = y*^{kqj}.

Usually, ZKP schemes are used to enable Sq to convince a verifier V that it knows a kqj that makes {α, β} a DDH pair without disclosing kqj itself, but ZKPs require numbers of challenges and responses between V and the mix-servers, which degrades the performance of the mix-net. In addition, Sq must prove its correct operation to each participant individually if the participants cannot trust V. To simplify the verification procedures and remove the necessity for Sq to prove its honesty to participants individually, each mix-server Sq in the proposed mechanism simply discloses its secrets kq1, kq2, ---, kqN in aggregated form.

First, it must be noted that if {g^{kqj}, y*^{kqj}} and {g^{kqh}, y*^{kqh}} are DDH pairs, their product {g^{kqj+kqh}, y*^{kqj+kqh}} is also a DDH pair. Therefore the product of all inputs and the product of all outputs of mix-server Sq must constitute a DDH pair when Sq is honest. Then, as in optimistic verifiable mix-nets [7], with κq = kq1+kq2+ --- +kqN, the proposed mechanism calculates the product of all inputs of Sq, i.e. {g^{k1*(q-1)+k2*(q-1)+ --- +kN*(q-1)}, D1·D2 --- DN·y*^{k1*(q-1)+k2*(q-1)+ --- +kN*(q-1)}}, and the product of all outputs of Sq, i.e. {g^{k1*(q)+k2*(q)+ --- +kN*(q)}, D1·D2 --- DN·y*^{k1*(q)+k2*(q)+ --- +kN*(q)}}, and examines whether α = g^{k1*(q)+ --- +kN*(q)}/g^{k1*(q-1)+ --- +kN*(q-1)} = g^{κq} and β = D1·D2 --- DN·y*^{k1*(q)+ --- +kN*(q)}/(D1·D2 --- DN·y*^{k1*(q-1)+ --- +kN*(q-1)}) = y*^{κq} constitute a DDH pair. When Sq is determined to be dishonest, it is forced to re-encrypt its inputs again.

The important point is that although the integers kq1, kq2, ---, kqN are secrets of Sq, Sq can disclose their sum κq if N (the number of data items) is sufficiently large. Namely, because Sq assigns different secret values to kq1, kq2, ---, kqN, it is computationally infeasible for entities other than Sq to recover any kqj from κq. Then, each mix-server Sq in the proposed mechanism simply discloses the sum κq once it has re-encrypted all its inputs. As a consequence, without the participation of Sq, anyone can convince itself that Sq is honest by calculating g^{κq} and y*^{κq} from the disclosed κq and examining the relations α = g^{κq} and β = y*^{κq}. Here, the assumption that N is sufficiently large is essential: if N is small, entities can identify the correspondences between inputs {g^{kj*(q-1)}, Dj·y*^{kj*(q-1)}} and outputs {g^{kh*(q)}, Dh·y*^{kh*(q)}}, in other words can learn kq1, kq2, ---, kqN, by calculating Y(j, h) = g^{kh*(q)}/g^{kj*(q-1)} for each possible pair (j, h) and examining whether the relation g^{κq} = Y(1, h1)·Y(2, h2) --- Y(N, hN) holds. But this is practically impossible when N is large enough, since the number of combinations is N!.
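The aggregate check can be modeled as follows. This is a toy Python sketch under simplifying assumptions: illustrative parameters p and g, a stand-in value for the common key y*, and a small N (far larger in practice).

```python
import random
from functools import reduce

p = 2**61 - 1                      # illustrative prime modulus
g = 3
y_star = pow(g, random.randrange(2, p - 1), p)   # stand-in for the common key y*

N = 8                              # number of data items (much larger in practice)
inputs = []
for _ in range(N):
    D, a = random.randrange(2, p), random.randrange(2, p - 1)
    inputs.append((pow(g, a, p), D * pow(y_star, a, p) % p))

# S_q re-encrypts every item with a fresh secret k_qj and discloses only their sum.
ks = [random.randrange(2, p - 1) for _ in range(N)]
outputs = [(c1 * pow(g, k, p) % p, c2 * pow(y_star, k, p) % p)
           for (c1, c2), k in zip(inputs, ks)]
random.shuffle(outputs)            # shuffling does not affect the products
kappa_q = sum(ks)                  # the only value S_q must publish

# Anyone can now verify S_q's aggregate behavior without interacting with it.
prod = lambda vals: reduce(lambda a, b: a * b % p, vals, 1)
inv = lambda v: pow(v, p - 2, p)   # modular inverse (p is prime)
alpha = prod(c1 for c1, _ in outputs) * inv(prod(c1 for c1, _ in inputs)) % p
beta = prod(c2 for _, c2 in outputs) * inv(prod(c2 for _, c2 in inputs)) % p
print(alpha == pow(g, kappa_q, p) and beta == pow(y_star, kappa_q, p))  # True
```

Note that the check reads only the two bulletin-board columns and the published κq; the shuffle permutation and the individual kqj stay hidden inside the products.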

In the above, if the product of all inputs and the product of all outputs of the whole mix-net M are examined, it is not necessary for V to verify the mix-servers individually; however, examining each mix-server makes the process of identifying dishonest entities simple, as will be discussed in Sec. 5.

4. Detecting Disrupted Data

It must be noted that if the input and output pairs of mix-net M are not examined individually, as in the previous section, a mix-server Sq can disrupt data without being detected by others. Namely, even if Sq re-encrypts the 2 inputs {g^{kj*(q-1)}, Dj·y*^{kj*(q-1)}} and {g^{kh*(q-1)}, Dh·y*^{kh*(q-1)}}, which correspond to data Dj and Dh owned by entities Pj and Ph, to {g^{kj*(q)}, U·Dj·y*^{kj*(q)}} and {g^{kh*(q)}, (Dh/U)·y*^{kh*(q)}} for an arbitrarily generated integer U, the pair {g^{kj*(q)+kh*(q)}/g^{kj*(q-1)+kh*(q-1)} = g^{kqj+kqh}, (U·Dj)(Dh/U)·y*^{kj*(q)+kh*(q)}/(Dj·Dh·y*^{kj*(q-1)+kh*(q-1)}) = y*^{kqj+kqh}} is still a DDH pair. In the same way, even if Sq re-encrypts the above 2 inputs to {g^{kj*(q)}, Dj·y*^{kj*(q-1)}·y*^{kqh}} and {g^{kh*(q)}, Dh·y*^{kh*(q-1)}·y*^{kqj}}, the pair {g^{kj*(q)+kh*(q)}/g^{kj*(q-1)+kh*(q-1)} = g^{kqj+kqh}, Dj·Dh·y*^{kj*(q-1)+kqh+kh*(q-1)+kqj}/(Dj·Dh·y*^{kj*(q-1)+kh*(q-1)}) = y*^{kqh+kqj}} is still a DDH pair. These dishonest acts of Sq can be detected when the individual decryption results are disclosed on the 2nd BB, i.e. the decryption results (U·Dj), (Dh/U), Dj·y*^{kqh-kqj} or Dh·y*^{kqj-kqh} may turn out to be meaningless; but the probability that they are meaningful is not 0. More seriously, Pj and Ph, the owners of data Dj and Dh, may put in meaningless data from the beginning. Therefore, a mechanism that enables any entity to determine whether decryption results have been disrupted by mix-servers is necessary.
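The first forgery above is easy to reproduce: moving an arbitrary factor U between two second components leaves their product, and hence the aggregate DDH check of Sec. 3, unchanged. A toy Python sketch with an illustrative prime:

```python
import random

p = 2**61 - 1
# Second components of two honestly re-encrypted ciphertexts:
c2_j = random.randrange(2, p)
c2_h = random.randrange(2, p)
# A dishonest server moves an arbitrary factor U from one to the other.
U = random.randrange(2, p)
forged_j = c2_j * U % p
forged_h = c2_h * pow(U, p - 2, p) % p   # c2_h / U mod p
# The product, and hence the aggregate check, is unchanged.
print(forged_j * forged_h % p == c2_j * c2_h % p)  # True
```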

In an optimistic verifiable mix-net [7], to detect disrupted decryption results, entity Pj encrypts Dj to {g^{aj}, Dj·y*^{aj}} = {ω, ς} and calculates the hash value h(ω, ς) = Hj as a check-code for verifying the validity of the decryption result. It then constructs the triple ∏(Dj) = <{g^{θj}, ω·y*^{θj}}, {g^{τj}, ς·y*^{τj}}, {g^{λj}, Hj·y*^{λj}}>, in other words, encrypts the elements of the triple {ω, ς, Hj} individually. After that, the mix-servers re-encrypt and decrypt ∏(Dj) instead of {ω, ς}, and they convince others that the decrypted results are not disrupted by showing the consistent check-code Hj in each decrypted triple, since h is a one-way function and no one other than Pj can generate a consistent triple {ω, ς, Hj}. Then, finally, {ω, ς} is decrypted to Dj.

The proposed scheme instead exploits unknown random numbers generated before the re-encryption stage, as shown in Figure 2. Compared with optimistic verifiable mix-nets, an advantage is that the check-code attached to data Dj is calculated from Dj itself instead of from its encrypted form {g^{aj}, Dj·y*^{aj}}, and by this feature the proposed mechanism can easily avoid relation attacks [4, 11], which optimistic verifiable mix-nets cannot cope with well, as discussed in Sec. 6. Moreover, anyone can confirm the honest calculation of the check-code without knowing Dj itself and without using additional mechanisms such as ZKPs, as will be discussed later.

Figure 2. Unknown random number generation stage

First, entity Pj, the owner of data Dj, generates its secret integers uj and bj, encrypts uj to {g^{bj}, uj·y*^{bj}} and calculates Dj^{uj}. Mix-server S1, which receives the pair <{g^{bj}, uj·y*^{bj}}, Dj^{uj}> from Pj (it must be noted that under the strong RSA assumption it is computationally infeasible for S1 to know Dj or uj from Dj^{uj}), generates its secret integers r1j and s1j and calculates the pair <{g^{bj+s1j} = g^{sj*(1)}, uj·r1j·y*^{bj+s1j} = uj·r1j·y*^{sj*(1)}}, {g^{r1j}, (Dj^{uj}·y*)^{r1j} = Dj^{uj·r1j}·y*^{r1j}}>. In the same way, Sq, which receives the pair <{g^{bj+s1j+ --- +s(q-1)j} = g^{sj*(q-1)}, (uj·r1j --- r(q-1)j)·y*^{sj*(q-1)}}, {g^{r1j --- r(q-1)j} = g^{rj*(q-1)}, Dj^{uj·r1j --- r(q-1)j}·y*^{rj*(q-1)}}> from Sq-1, calculates <{g^{bj+s1j+ --- +sqj} = g^{sj*(q)}, (uj·r1j --- rqj)·y*^{sj*(q)}}, {g^{r1j --- rqj} = g^{rj*(q)}, (Dj^{uj·r1j --- r(q-1)j}·y*^{rj*(q-1)})^{rqj} = Dj^{uj·r1j --- rqj}·y*^{rj*(q)}}> while generating its secret integers rqj and sqj, and the last mix-server SQ calculates the pair <{g^{sj*(Q)}, (uj·r1j --- rQj)·y*^{sj*(Q)} = Rj·y*^{sj*(Q)}}, {g^{rj*(Q)}, Dj^{Rj}·y*^{rj*(Q)}}>. Then, using a constant integer A that is common and known to all participants, Pj puts the triple <{g^{aj}, Dj·y*^{aj}}, {g^{sj*(Q)}, Rj·y*^{sj*(Q)}}, {g^{rj*(Q)}, Dj^A·Dj^{Rj}·y*^{rj*(Q)} = Dj^{Rj+A}·y*^{rj*(Q)}}> into mix-net M instead of {g^{aj}, Dj·y*^{aj}}, and, based on the integers kqj, tqj and vqj that are the secrets of each mix-server Sq, M re-encrypts it to the triple <{g^{aj+k1j+ --- +kQj} = g^{kj*(Q)}, Dj·y*^{aj+k1j+ --- +kQj} = Dj·y*^{kj*(Q)}}, {g^{sj*(Q)+t1j+ --- +tQj} = g^{tj*(Q)}, Rj·y*^{sj*(Q)+t1j+ --- +tQj} = Rj·y*^{tj*(Q)}}, {g^{rj*(Q)+v1j+ --- +vQj} = g^{vj*(Q)}, Dj^{Rj+A}·y*^{rj*(Q)+v1j+ --- +vQj} = Dj^{Rj+A}·y*^{vj*(Q)}}> while shuffling it with the other data. Therefore, when the mix-servers in M behave honestly, the re-encrypted form of Dj is decrypted to the triple <{g^{kj*(Q)}, Dj}, {g^{tj*(Q)}, Rj}, {g^{vj*(Q)}, Dj^{Rj+A}}>; i.e. regardless of whether Dj is meaningful or not, the decrypted triple is valid only when the third element equals Dj to the power of Rj+A.
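The consistency condition on the decrypted triple can be modeled as follows. This plaintext-level Python sketch isolates the check-code arithmetic; the modulus p, the constant A and the number of servers are illustrative, and the encryption layers are omitted.

```python
import random

p = 2**61 - 1                  # illustrative prime modulus
A = 7                          # public constant A, common to all participants
D_j = random.randrange(2, p)   # P_j's data (meaningful or not)

# Unknown random number generation stage, plaintext view: P_j contributes u_j,
# each of Q = 3 servers contributes r_qj; no single entity knows every factor.
u_j = random.randrange(2, p - 1)
R_j = u_j
acc = pow(D_j, u_j, p)         # D_j^{u_j}, as sent to S_1
for _ in range(3):
    r = random.randrange(2, p - 1)
    R_j = R_j * r % (p - 1)    # exponents can be tracked modulo p-1 (Fermat)
    acc = pow(acc, r, p)       # each server raises the running value to its r_qj

# After mixing and decryption, the 2nd BB holds the triple (D_j, R_j, D_j^{R_j+A}).
third = acc * pow(D_j, A, p) % p
# Anyone can check the triple's consistency without contacting any server:
print(third == pow(D_j, (R_j + A) % (p - 1), p))   # True
```

A forger would have to choose the third element as the first element raised to the second plus A, which requires knowing every factor of Rj.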

As a consequence, to dishonestly re-encrypt the previous 2 triples corresponding to Dj and Dh into <{A1(j, q), A2(j, q)}, {B1(j, q), B2(j, q)}, {C1(j, q), C2(j, q)}> and <{A1(h, q), A2(h, q)}, {B1(h, q), B2(h, q)}, {C1(h, q), C2(h, q)}> without being detected by others, given that these are finally decrypted to the triples <{A1(j, Q), Γj}, {B1(j, Q), Ωj}, {C1(j, Q), Φj}> and <{A1(h, Q), Γh}, {B1(h, Q), Ωh}, {C1(h, Q), Φh}> respectively, Sq must define the integers Γj, Ωj, Φj, Γh, Ωh and Φh so that the relations Γj^{Ωj+A} = Φj and Γh^{Ωh+A} = Φh hold. But this is impossible unless Sq conspires with all the other mix-servers and with Pj and Ph. Namely, although Sq must know the integers Rj = uj·r1j·r2j --- rQj and Rh = uh·r1h·r2h --- rQh to calculate Ωj and Ωh consistently, rzj and rzh are secrets of each Sz, and uj and uh are secrets of Pj and Ph, respectively. Here, it must be noted that the mechanism discussed in this section works only when it is combined with the one discussed in Sec. 3; e.g. without the mechanism in Sec. 3, a mix-server Sq that knows the encryption key y* could re-encrypt the above triples corresponding to Dj and Dh at its will so that their final decryption results satisfy Γj^{Ωj+A} = Φj and Γh^{Ωh+A} = Φh.

But 2 problems must be taken into account. The 1st problem is that even if the mix-servers in the re-encryption stage honestly re-encrypt all their inputs, each mix-server in the decryption stage can forge its output triples at its will so that they are finally decrypted to consistent triples, because the encryption keys y1, y2, ---, yQ and y* are publicly known. For example, Sq, which receives the triple <{g^{kj*(Q)}, Dj·g^{(x1+ --- +xq)·kj*(Q)}}, {g^{tj*(Q)}, Rj·g^{(x1+ --- +xq)·tj*(Q)}}, {g^{vj*(Q)}, Dj^{Rj+A}·g^{(x1+ --- +xq)·vj*(Q)}}> from Sq+1, can decrypt it to <{g^{z1}, Γj·(y1·y2 --- y(q-1))^{z1} = Γj·g^{(x1+ --- +x(q-1))·z1}}, {g^{z2}, Ωj·g^{(x1+ --- +x(q-1))·z2}}, {g^{z3}, Φj·g^{(x1+ --- +x(q-1))·z3}}> while arbitrarily defining the integers z1, z2, z3, Γj, Ωj and Φj so that the relation Γj^{Ωj+A} = Φj holds. Also, if the last mix-server SQ, conspiring with the 1st mix-server S1, replaces one of the re-encrypted results on the 1st BB with Θ(Dj) = <{g^{aj+γ}, Dj·y*^{aj+γ}}, {g^{sj*(Q)+δ}, Rj·y*^{sj*(Q)+δ}}, {g^{rj*(Q)+ε}, Dj^{Rj+A}·y*^{rj*(Q)+ε}}>, then because it is decrypted to the consistent value Dj and the mix-servers in the decryption stage do not shuffle their decryption results, SQ can know Dj, the secret of Pj (SQ can easily calculate Θ(Dj) if S1 informs it of the triple <{g^{aj}, Dj·y*^{aj}}, {g^{sj*(Q)}, Rj·y*^{sj*(Q)}}, {g^{rj*(Q)}, Dj^{Rj+A}·y*^{rj*(Q)}}> put in by Pj). Here, it might be possible to make each encryption key yq a secret of Sq, but the common encryption key y* must be publicly disclosed, and at least SQ, the 1st mix-server in the decryption stage, could still forge its decryption results at its will as above. Therefore, the honest behavior of the decryption stage must also be verified.

Correct decryption of data in the decryption stage can also be verified in the same way as in Sec. 3. First, S1, as a representative of the mix-servers, calculates g^{a1}·g^{a2} --- g^{aN} = g^{a1+a2+ --- +aN} from the inputs of P1, P2, ---, PN, and asks each Sq to calculate and disclose Gq = (g^{a1+a2+ --- +aN})^{xq} = yq^{a1+a2+ --- +aN}. After that, the verifier V calculates Gq·yq^{κ1+κ2+ --- +κQ} = yq^{a1+a2+ --- +aN+κ1+κ2+ --- +κQ} = yq^{k1*(Q)+k2*(Q)+ --- +kN*(Q)} for each q, where V knows each sum κq = kq1+kq2+ --- +kqN because it was already disclosed in the re-encryption stage by Sq; note that entities other than Pj cannot know y*^{aj} from g^{a1+a2+ --- +aN}. Then, it is straightforward for anyone to determine whether φ(q-1) = D1·D2 --- DN·g^{(x1+ --- +x(q-1))·(k1*(Q)+k2*(Q)+ --- +kN*(Q))}, the product of all decryption results of Sq, was honestly calculated from φq = D1·D2 --- DN·g^{(x1+ --- +xq)·(k1*(Q)+k2*(Q)+ --- +kN*(Q))}, the product of all inputs of Sq. Namely, it is enough to confirm the relations φq/φ(q-1) = g^{xq·(k1*(Q)+k2*(Q)+ --- +kN*(Q))} = Gq·yq^{κ1+κ2+ --- +κQ} and g^{k1*(Q)+k2*(Q)+ --- +kN*(Q)} = g^{a1+a2+ --- +aN}·g^{κ1+κ2+ --- +κQ}. In the same way, anyone can easily verify the decryption results of {g^{tj*(Q)}, Rj·g^{(x1+ --- +xq)·tj*(Q)}} and {g^{vj*(Q)}, Dj^{Rj+A}·g^{(x1+ --- +xq)·vj*(Q)}}, and when Sq is dishonest, it is forced to decrypt its inputs again as in the re-encryption stage.
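A toy model of these aggregate decryption checks follows. It is a Python sketch under stated assumptions: sum_a stands in for a1+ --- +aN, and the φ values are computed directly from their definitions rather than from an actual mix-net run, so only the verification arithmetic is exercised.

```python
import random
from functools import reduce

p = 2**61 - 1
g = 3
prod = lambda vals: reduce(lambda a, b: a * b % p, vals, 1)

N, Q = 5, 3
x = [random.randrange(2, p - 1) for _ in range(Q)]        # secret keys x_q
y = [pow(g, xq, p) for xq in x]                           # public keys y_q

sum_a = sum(random.randrange(2, p - 1) for _ in range(N)) # a1 + ... + aN
kappas = [random.randrange(2, p - 1) for _ in range(Q)]   # disclosed kappa_q values
E = sum_a + sum(kappas)                                   # k1*(Q) + ... + kN*(Q)
Ds = [random.randrange(2, p) for _ in range(N)]

# phi(q): product of the 2nd components while the layers of S_1..S_q are still on.
def phi(q):
    return prod(Ds) * pow(g, sum(x[:q]) * E, p) % p

# Each S_q discloses G_q = (g^{a1+...+aN})^{x_q}; V never learns x_q itself.
G = [pow(pow(g, sum_a, p), xq, p) for xq in x]

# V's check for every server, from the public values G_q, y_q and the kappa_q only.
ok = all(
    phi(q) * pow(phi(q - 1), p - 2, p) % p
    == G[q - 1] * pow(y[q - 1], sum(kappas), p) % p
    for q in range(1, Q + 1)
)
print(ok)  # True when every decryption layer was stripped honestly
```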

An Sq in the decryption stage could also try to dishonestly decrypt the 2 triples <{g^{kj*(Q)}, Dj·g^{(x1+ --- +xq)·kj*(Q)}}, {g^{tj*(Q)}, Rj·g^{(x1+ --- +xq)·tj*(Q)}}, {g^{vj*(Q)}, Dj^{Rj+A}·g^{(x1+ --- +xq)·vj*(Q)}}> and <{g^{kh*(Q)}, Dh·g^{(x1+ --- +xq)·kh*(Q)}}, {g^{th*(Q)}, Rh·g^{(x1+ --- +xq)·th*(Q)}}, {g^{vh*(Q)}, Dh^{Rh+A}·g^{(x1+ --- +xq)·vh*(Q)}}> to <{g^{kj*(Q)}, Γj(q-1)}, {g^{tj*(Q)}, Ωj(q-1)}, {g^{vj*(Q)}, Φj(q-1)}> and <{g^{kh*(Q)}, Γh(q-1)}, {g^{th*(Q)}, Ωh(q-1)}, {g^{vh*(Q)}, Φh(q-1)}>, which are finally decrypted to <{g^{kj*(Q)}, Γj}, {g^{tj*(Q)}, Ωj}, {g^{vj*(Q)}, Φj}> and <{g^{kh*(Q)}, Γh}, {g^{th*(Q)}, Ωh}, {g^{vh*(Q)}, Φh}>, without being detected by the above mechanism. But anyone can detect this dishonesty, because Sq cannot calculate Γj(q-1), Ωj(q-1), Φj(q-1), Γh(q-1), Ωh(q-1) or Φh(q-1) so that the final decryption results satisfy the relations Γj^{Ωj+A} = Φj and Γh^{Ωh+A} = Φh.

The 2nd problem to be considered is that data owner Pj itself may disrupt its data to impute the liability to mix-net M; and when Pj or an Sq in the random number generation stage calculates {g^{sj*(Q)}, Rj·y*^{sj*(Q)}} or {g^{rj*(Q)}, Dj^{Rj+A}·y*^{rj*(Q)}} in the triple <{g^{aj}, Dj·y*^{aj}}, {g^{sj*(Q)}, Rj·y*^{sj*(Q)}}, {g^{rj*(Q)}, Dj^{Rj+A}·y*^{rj*(Q)}}> dishonestly, the decryption result of the triple becomes inconsistent even if every mix-server in the re-encryption and decryption stages behaved honestly. The proposed mechanism copes with this problem by identifying the entities that behaved dishonestly, as discussed in the next section; i.e. although Pj or an Sq in the random number generation stage can behave dishonestly, the entities liable for the dishonesty are finally revealed, and they are forced to reprocess the data corresponding to the dishonesty.

Then, each mix-server can prove its honest behavior to others while maintaining its decryption key xq and the integers kqj, sqj, tqj and vqj as its secrets. Also, although Pj in the above discloses not only {g^{aj}, Dj·y*^{aj}} but also {g^{bj}, uj·y*^{bj}}, Dj^{uj}, {g^{sj*(Q)}, Rj·y*^{sj*(Q)}} and {g^{rj*(Q)}, Dj^{Rj+A}·y*^{rj*(Q)}}, apparently no one other than Pj can know that Pj owns Dj. In addition, even Pj itself cannot identify the correspondence between the triple <{g^{aj}, Dj·y*^{aj}}, {g^{sj*(Q)}, Rj·y*^{sj*(Q)}}, {g^{rj*(Q)}, Dj^{Rj+A}·y*^{rj*(Q)}}> it put into mix-net M and the triple <{g^{kj*(Q)}, Dj}, {g^{tj*(Q)}, Rj}, {g^{vj*(Q)}, Dj^{Rj+A}}> that M calculated (except in cases where Dj is unique to Pj), because no one knows the integer Rj.

5. Identifications of Entities Liable for Data Disruptions

Entities that behaved dishonestly can be identified easily as follows. First, anyone can detect that some entity is dishonest whenever Pj or an Sq behaved dishonestly. Namely, if Pj or an Sq constructs {g^{sj*(Q)}, Rj·y*^{sj*(Q)}} or {g^{rj*(Q)}, Dj^{Rj+A}·y*^{rj*(Q)}} dishonestly in the unknown random number generation stage, mix-net M decrypts the input <{g^{aj}, Dj·y*^{aj}}, {g^{sj*(Q)}, Rj·y*^{sj*(Q)}}, {g^{rj*(Q)}, Dj^{Rj+A}·y*^{rj*(Q)}}> to an inconsistent value, i.e. the final decryption result Ψ = <{g^{kj*(Q)}, Γj}, {g^{tj*(Q)}, Ωj}, {g^{vj*(Q)}, Φj}> does not satisfy the relation Γj^{Ωj+A} = Φj; and if an Sq in the re-encryption stage re-encrypts <{g^{kj*(q-1)}, Dj·y*^{kj*(q-1)}}, {g^{tj*(q-1)}, Rj·y*^{tj*(q-1)}}, {g^{vj*(q-1)}, Dj^{Rj+A}·y*^{vj*(q-1)}}> dishonestly, or one in the decryption stage decrypts <{g^{kj*(Q)}, Dj·g^{(x1+x2+ --- +xq)·kj*(Q)}}, {g^{tj*(Q)}, Rj·g^{(x1+x2+ --- +xq)·tj*(Q)}}, {g^{vj*(Q)}, Dj^{Rj+A}·g^{(x1+x2+ --- +xq)·vj*(Q)}}> dishonestly, then the product of Sq's inputs and that of its outputs constitute an inconsistent pair, or the final decryption result Ψ becomes inconsistent. Here, in cases where mix-servers generate inconsistent input and output pairs, the dishonest mix-servers are forced to repeat their re-encryptions or decryptions until the pairs become consistent; therefore all dishonesties are eventually detected as inconsistent decryption results on the 2nd BB. Then, although part of the procedure requires the participation of mix-servers, once inconsistent decryption results are detected, verifier V can easily identify the dishonest mix-servers as follows.

First, provided that a decryption result Ψ on the 2nd BB is determined to be inconsistent, V can identify the dishonest entities by examining the behavior of the mix-servers only for the data corresponding to Ψ. Dishonest mix-servers in the decryption stage can be identified based on the Diffie-Hellman scheme [1]. Namely, for each Sq in the decryption stage, verifier V calculates the triples <Dj·g^{(x1+x2+ --- +xq)·kj*(Q)}/(Dj·g^{(x1+x2+ --- +x(q-1))·kj*(Q)}) = ηqj, Rj·g^{(x1+x2+ --- +xq)·tj*(Q)}/(Rj·g^{(x1+x2+ --- +x(q-1))·tj*(Q)}) = μqj, Dj^{Rj+A}·g^{(x1+x2+ --- +xq)·vj*(Q)}/(Dj^{Rj+A}·g^{(x1+x2+ --- +x(q-1))·vj*(Q)}) = σqj> and <g^{kj*(Q)·w1}, g^{tj*(Q)·w2}, g^{vj*(Q)·w3}>, generating its secret integers w1, w2 and w3. After that, Sq calculates (g^{kj*(Q)·w1})^{xq}, (g^{tj*(Q)·w2})^{xq} and (g^{vj*(Q)·w3})^{xq} using its secret key xq, and V determines that Sq is honest when the relations (ηqj)^{w1} = (g^{kj*(Q)·w1})^{xq}, (μqj)^{w2} = (g^{tj*(Q)·w2})^{xq} and (σqj)^{w3} = (g^{vj*(Q)·w3})^{xq} hold. If Sq, which knows xq, is honest, it is easy for it to calculate (g^{kj*(Q)·w1})^{xq}, (g^{tj*(Q)·w2})^{xq} and (g^{vj*(Q)·w3})^{xq} consistently; on the other hand, under the strong RSA assumption it is computationally infeasible to calculate them consistently without knowing w1, w2 and w3. For the same reason, V cannot know xq from (g^{kj*(Q)·w1})^{xq}, (g^{tj*(Q)·w2})^{xq} or (g^{vj*(Q)·w3})^{xq} either.
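The blinded challenge-response between V and Sq can be sketched as follows (toy Python with illustrative parameters; only one of the three triple components is shown, the other two are analogous):

```python
import random

p = 2**61 - 1
g = 3
x_q = random.randrange(2, p - 1)           # S_q's secret decryption key
k_exp = random.randrange(2, p - 1)         # exponent k_j*(Q); c1 = g^{k_j*(Q)} is public
c1 = pow(g, k_exp, p)

eta = pow(c1, x_q, p)                      # ratio of S_q's input/output: g^{x_q k_j*(Q)}

# V blinds c1 with its secret w and sends the challenge to S_q.
w = random.randrange(2, p - 1)
challenge = pow(c1, w, p)                  # g^{k_j*(Q) w}
response = pow(challenge, x_q, p)          # S_q exponentiates with x_q

# V accepts iff eta^w equals the response; x_q itself is never revealed.
print(pow(eta, w, p) == response)          # True for an honest S_q
```

A dishonest Sq that produced eta with some exponent other than x_q would have to predict w to answer consistently.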

Provided that DI(j, q) and DO(j, q) are the input and output pair of mix-server Sq in the re-encryption stage that corresponds to the inconsistent decryption result Ψ, dishonest entities in the re-encryption stage are identified as follows. Here, the last mix-server SQ in the re-encryption stage can disclose the secret integers kQj, tQj and vQj it used for re-encrypting DI(j, Q) while maintaining the confidentiality of the individual data, i.e. no one can learn anything about the other data, and the integers kqj, tqj and vqj remain secrets of Sq for q < Q. Therefore, V can easily determine whether SQ is honest: SQ is honest when DI(j, Q) is certainly an output that SQ-1 generated and DI(j, Q) is successfully transformed to DO(j, Q) by the kQj, tQj and vQj disclosed by SQ and the public key y*. V can determine whether Sq (q < Q) is honest in the same way, based on the integers kqj, tqj and vqj disclosed by Sq. Also, even when V determines that S1 is dishonest, the confidentiality of the data is still maintained, because DI(j, 1) is still encrypted by its owner Pj.

When all mix-servers in the re-encryption stage are honest, Ψ = <{g^{kj*(Q)}, Γj}, {g^{tj*(Q)}, Ωj}, {g^{vj*(Q)}, Φj}> is successfully linked to the input <{g^{aj}, Dj·y*^{aj}}, {g^{sj*(Q)}, Rj·y*^{sj*(Q)}}, {g^{rj*(Q)}, Dj^{Rj+A}·y*^{rj*(Q)}}> put in by Pj. To identify dishonest entities in the unknown random number generation stage, V then asks SQ to disclose its input and output pair <G(j, Q-1), H(j, Q-1)>, <{g^{sj*(Q)}, Rj·y*^{sj*(Q)}}, {g^{rj*(Q)}, Dj^{Rj}·y*^{rj*(Q)}}> in the unknown random number generation stage together with its secret integers rQj and sQj, and examines whether <G(j, Q-1), H(j, Q-1)> is a pair that was certainly calculated by SQ-1 and whether its re-encrypted form coincides with <{g^{sj*(Q)}, Rj·y*^{sj*(Q)}}, {g^{rj*(Q)}, Dj^{Rj}·y*^{rj*(Q)}}> when the integers rQj and sQj are applied. In the same way, each Sq discloses its input and output pair <G(j, q-1), H(j, q-1)>, <G(j, q), H(j, q)> together with its secret integers rqj and sqj when Sq+1 is determined to be honest, and Sq* is identified as dishonest when its input and output pair <G(j, q*-1), H(j, q*-1)>, <G(j, q*), H(j, q*)> is inconsistent. When all mix-servers are determined to be honest, V concludes that Pj put an inconsistent triple into M from the beginning. Regarding the disclosures of rqj and sqj, Sq defines different values for different data; therefore these disclosures bring no inconvenience to honest data owners or mix-servers.

In the above, when dishonest servers are identified in the decryption or re-encryption stage, the data Dj owned by Pj that was re-encrypted or decrypted dishonestly can be reprocessed while preserving the privacy of Pj; e.g. if Sq in the re-encryption stage is identified as a dishonest mix-server, verifier V simply asks Sq, Sq+1, ---, SQ in the re-encryption stage and SQ, SQ-1, ---, S1 in the decryption stage to carry out their re-encryptions or decryptions again. But when the dishonest mix-servers are those in the unknown random number generation stage, or they include the 1st mix-server S1 in the re-encryption stage, Pj can preserve its privacy only if Dj is not reprocessed; if Dj is reprocessed, anyone can learn the correspondence between Pj and its data Dj, because Dj is newly disclosed on the 2nd BB as the reprocessed data. However, in many cases mix-servers do not behave dishonestly, because mix-servers are usually configured by authorities that cannot continue their businesses once their dishonesty is revealed; therefore Pj can preserve its privacy when it is honest. In other words, the mechanisms discussed in this paper are prepared mainly for protecting mix-net M from dishonest entities that claim M is dishonest while generating inconsistent data themselves, and usually the above problem is not serious.

6. Performance of the Proposed Mechanism

6.1. Numbers of Interactions

Regarding computation volumes, schemes based on the approaches adopted in optimistic verifiable mix-nets also convince verifiers that given pairs of data constitute DDH pairs without examining the pairs individually [6, 7], i.e. each mix-server Sq and verifier V exchange information about aggregated values of the inputs and outputs of Sq, and consequently the computation volume required for verification does not depend on the number of data items. But V and mix-net M must interact with each other, exchanging a volume of data that is proportional to the number of mix-servers.

A distinctive advantage of the proposed mechanism compared with them is that once a value aggregating the encryption parameters of each mix-server Sq (i.e. κq = kq1+kq2+ --- +kqN) is disclosed, anyone can verify the validity of the given pairs by itself without any interaction with the mix-servers. This means that mix-net M can convince all relevant entities that the pairs are valid without additional computation, even in environments where trustworthy verifiers cannot be assumed. Unlike the proposed mechanism, if the entities behind an optimistic verifiable mix-net cannot trust V as their representative, the mix-net must carry out the same calculations for each entity individually while interacting with it.

As an exception, the procedures in the proposed mechanism for identifying the entities liable for disrupted decryption results do require interactions between verifiers and mix-servers, but these procedures are carried out only for inconsistent data, and only when such data are detected. Regarding the consistency of individual decryption results, unlike the verification of DDH pairs, each decryption result must be examined individually, and as a consequence the volume of computation for verifying M's behavior increases. But the consistency of each decryption result <{g^{kj*(Q)}, Dj}, {g^{tj*(Q)}, Rj}, {g^{vj*(Q)}, Dj^{Rj+A}}> can be verified without the participation of mix-servers, just as in the DDH pair verifications, i.e. anyone can determine whether Dj, Rj and Dj^{Rj+A} are consistent by calculating Dj^{Rj+A} by itself from the Dj and Rj disclosed on the 2nd BB. Therefore, verification of individual decryption results requires no computation by M. Also, the results can be verified in parallel by multiple entities without waiting for responses from mix-servers, so the performance of the proposed mechanism can be maintained.

The unknown random number generation stage, which does not exist in already existing verifiable mix-nets, is another cause of performance degradation. However, considering that any entity can verify the correct behavior of the mix-net by itself without interacting with mix-servers, many applications can accept this inconvenience.

6.2. Performance against Attacks

It is reported that optimistic verifiable mix-nets are weak against relational attacks [11]. Namely, suppose two entities Ph and Pm find ∏(Dj) = <{g^θj, ω·y*^θj}, {g^τj, ς·y*^τj}, {g^λj, Hj·y*^λj}> that Pj had put in mix-net M (where ω = g^aj and ς = Dj·y*^aj), and put in M the triples <{g^θh, (g^θj)^γ·y*^θh}, {g^τh, (ω·y*^θj)^γ·y*^τh}, {g^λh, h((g^θj)^γ, (ω·y*^θj)^γ)·y*^λh}> and <{g^θm, (g^θj)^δ·y*^θm}, {g^τm, (ω·y*^θj)^δ·y*^τm}, {g^λm, h((g^θj)^δ, (ω·y*^θj)^δ)·y*^λm}>, respectively. These are finally decrypted to the consistent triples <(g^θj)^γ, (ω·y*^θj)^γ, h((g^θj)^γ, (ω·y*^θj)^γ)> and <(g^θj)^δ, (ω·y*^θj)^δ, h((g^θj)^δ, (ω·y*^θj)^δ)>, so Ph and Pm can identify their data by examining all decryption results for the pair g^(θjγ) and g^(θjδ) that satisfies the relation (g^(θjγ))^δ = (g^(θjδ))^γ. In addition, ∏(Dj) is decrypted to <ω, ς, Hj>, and the 2nd element of the identified triple, (ω·y*^θj)^γ, is decrypted to ω^γ. Then, Ph and Pm can learn Pj's data Dj, i.e. Dj is the decrypted form of the triple <ω, ς, Hj> whose 1st element is the value ω = (ω^γ)^(1/γ).
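The exponent relation the attackers exploit can be checked numerically. The tiny group and the concrete exponents below are illustrative assumptions:

```python
# The relational attack pairs two decryption results by testing
# (g^(theta*gamma))^delta == (g^(theta*delta))^gamma, which holds because
# both sides equal g^(theta*gamma*delta). Group and exponents are demo values.
p, g = 1019, 4                       # tiny demo group
theta, gamma, delta = 123, 77, 301   # P_j's exponent and the attackers' secrets

u = pow(g, theta * gamma, p)         # decrypted 1st element of P_h's triple
v = pow(g, theta * delta, p)         # decrypted 1st element of P_m's triple

print(pow(u, delta, p) == pow(v, gamma, p))  # True
```

Scanning all pairs of decryption results for this relation is what lets Ph and Pm relocate their injected triples, and from them Pj's original ciphertext.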

Here, the reason why optimistic verifiable mix-nets are weak against relational attacks is that the hash value Hj above is calculated from the encrypted form of the original data Dj. Namely, entities other than Pj, which do not know the decrypted value of ∏(Dj), can still generate consistent triples from it. In the proposed mechanism, by contrast, entities cannot generate consistent encrypted forms without knowing their decrypted forms; therefore relational attacks are avoided.

7. Conclusion

A simplified scheme for verifiable mix-nets has been proposed. In environments where the number of data is sufficiently large and the decryption results of individual data, i.e. the outputs of mix-nets, can be publicly disclosed, the proposed scheme works well without using time-consuming ZKPs. More importantly, entities can verify encryption and decryption results by themselves without interacting with mix-servers. The scheme also effectively protects itself from various attacks. Its primary verification procedures do not examine the correct re-encryption or decryption of individual data, and although the consistency of decrypted data must be examined individually, it can be examined by anyone without communicating with mix-servers, i.e. a set of decryption results can be examined by multiple entities in parallel. Therefore, highly efficient e-voting and e-poll systems, in which decrypted votes and opinions can be publicly disclosed, can be developed based on the proposed mix-nets while preserving the privacy of participants and ensuring the honest behavior of authorities.

As future work, the inconvenience that the privacy of data owners may be invaded when dishonest entities are identified in the unknown random number generation stage must be removed, although this inconvenience usually arises only when the data owners themselves are dishonest. A simple solution is to let data owners access mix-nets anonymously (above, data owners were assumed to access mix-nets while revealing their identities). However, this solution cannot prevent the owners of disrupted data themselves from knowing the correspondences between them and their decrypted data in the 2nd BB, because the owners know their own inputs, which finally correspond to the disrupted outputs. Also, data owner Pj can know the correspondence between itself and its data Dj in the 2nd BB when Dj is unique to Pj, as mentioned in previous sections. Therefore, the proposed scheme cannot work perfectly in e-voting systems. Namely, if Dj is Pj's vote in an e-voting system, an entity that is forcing Pj to abstain from voting by casting a unique invalid vote can easily confirm whether Pj behaved as asked by examining the 2nd BB. To make e-voting systems more secure, schemes for convincing voters that an encrypted Dj is a vote for a non-winner (a candidate that cannot acquire enough votes), so that election authorities need not decrypt unique votes [15], must be developed.

References

[1]  W. Diffie and M. E. Hellman, “New Directions in Cryptography,” IEEE Trans. on Information Theory, IT-22(6), 644-654, 1976.

[2]  M. Blum, P. Feldman and S. Micali, “Non-interactive Zero-knowledge and Its Applications,” Proc. of the 20th Annual ACM Symposium on Theory of Computing, 103-112, 1988.

[3]  S. Goldwasser, S. Micali and C. Rackoff, “The Knowledge Complexity of Interactive Proof Systems,” SIAM Journal on Computing, 18(1), 186-208, 1989.

[4]  B. Pfitzmann, “Breaking an Efficient Anonymous Channel,” Eurocrypt ’94, LNCS 950, 332-340, 1995.

[5]  M. Abe, “Universally Verifiable Mix-Net with Verification Work Independent of the Number of Mix-Servers,” IEICE Trans. Fundamentals, E83-A(7), 1431-1440, 2000.

[6]  D. Boneh and P. Golle, “Almost Entirely Correct Mixing with Applications to Voting,” ACM Conference on Computer and Communications Security, 68-77, 2002.

[7]  P. Golle, S. Zhong, D. Boneh, M. Jakobsson and A. Juels, “Optimistic Mixing for Exit-Polls,” Asiacrypt 2002, 451-465, 2002.

[8]  M. Jakobsson, A. Juels and R. Rivest, “Making Mix Nets Robust for Electronic Voting by Randomized Partial Checking,” USENIX Security ’02, 339-353, 2002.

[9]  L. Nguyen, R. Safavi-Naini and K. Kurosawa, “Verifiable Shuffles: A Formal Model and a Paillier-based Efficient Construction with Provable Security,” PKC 2004, LNCS 2947, 61-75, 2004.

[10]  J. Furukawa, “Efficient, Verifiable Shuffle Decryption and Its Requirement of Unlinkability,” PKC 2004, LNCS 2947, 319-332, 2004.

[11]  D. Wikström, “Five Practical Attacks for Optimistic Mixing for Exit-Polls,” Proceedings of SAC 2003, 160-175, 2004.

[12]  K. Sampigethaya and R. Poovendran, “A Framework and Taxonomy for Comparison of Electronic Voting Schemes,” Elsevier Computers and Security, 25, 137-153, 2006.

[13]  S. Weber, “A Coercion-Resistant Cryptographic Voting Protocol - Evaluation and Prototype Implementation,” Diploma thesis, Darmstadt University of Technology, 2006.

[14]  K. A. Md Rokibul, S. Tamura, S. Taniguchi and T. Yanase, “An Anonymous Voting Scheme based on Confirmation Numbers,” IEEJ Trans. EIS, 130(11), 2065-2073, 2010.

[15]  S. Tamura, “Anonymous Security Systems and Applications: Requirements and Solutions,” Information Science Reference, 2012.
 