MPCTensor

An MPCTensor is a CrypTensor encrypted using the secure MPC protocol. In order to support the mathematical operations required by the MPCTensor, CrypTen implements two kinds of secret-sharing protocols defined by ptype:

  • crypten.mpc.arithmetic for arithmetic secret-sharing

  • crypten.mpc.binary for binary secret-sharing

Arithmetic secret sharing forms the basis for most of the mathematical operations implemented by MPCTensor. Similarly, binary secret-sharing allows for the evaluation of logical expressions.

We can use the ptype attribute to create a CrypTensor with the appropriate secret-sharing protocol. For example:

# arithmetic secret-shared tensors
x_enc = crypten.cryptensor([1.0, 2.0, 3.0], ptype=crypten.mpc.arithmetic)
print("x_enc internal type:", x_enc.ptype)

# binary secret-shared tensors
y_enc = crypten.cryptensor([1, 2, 1], ptype=crypten.mpc.binary)
print("y_enc internal type:", y_enc.ptype)

We also provide helpers to execute secure multi-party computations in separate processes (see Communicator).

For technical details, see Damgard et al. (2012) and Beaver (1991), which describe the Beaver protocol used in our implementation.

For examples illustrating arithmetic and binary secret-sharing in CrypTen, the ptype attribute, and the execution of secure multi-party computations, please see Tutorial 2.

Tensor Operations

class crypten.mpc.mpc.ConfigManager(*args)

Use this to temporarily change a value in the mpc.config object. The following sets config.exp_iterations to 10 for one function invocation and then sets it back to the previous value:

with ConfigManager("exp_iterations", 10):
    tensor.exp()

class crypten.mpc.mpc.MPCConfig(exp_iterations: int = 8, reciprocal_method: str = 'NR', reciprocal_nr_iters: int = 10, reciprocal_log_iters: int = 1, reciprocal_all_pos: bool = False, reciprocal_initial: any = None, sigmoid_tanh_method: str = 'reciprocal', sigmoid_tanh_terms: int = 32, sigmoid_tanh_clip_value: int = 1, log_iterations: int = 2, log_exp_iterations: int = 8, log_order: int = 8, _eix_iterations: int = 10, max_method: str = 'log_reduction')

A configuration object for use by the MPCTensor.
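
The fields above are read from the mpc.config object referenced by ConfigManager; the sketch below assumes that object is the module-level config instance in crypten.mpc.mpc:

from crypten.mpc.mpc import ConfigManager, config  # 'config' assumed to be the module-level MPCConfig instance

print(config.exp_iterations)        # default number of exp() iterations (8)

with ConfigManager("exp_iterations", 16):
    print(config.exp_iterations)    # 16 inside the context

print(config.exp_iterations)        # restored to 8 afterwards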

class crypten.mpc.mpc.MPCTensor(tensor, ptype=<ptype.arithmetic: 0>, device=None, *args, **kwargs)
abs()

Computes the absolute value of a tensor

adaptive_avg_pool2d(output_size)

Applies a 2D adaptive average pooling over an input signal composed of several input planes.

See AdaptiveAvgPool2d for details and output shape.

Parameters

output_size – the target output size (single integer or double-integer tuple)

adaptive_max_pool2d(output_size, return_indices=False)

Applies a 2D adaptive max pooling over an input signal composed of several input planes.

See AdaptiveMaxPool2d for details and output shape.

Parameters
  • output_size – the target output size (single integer or double-integer tuple)

  • return_indices – whether to return pooling indices. Default: False

argmax(dim=None, keepdim=False, one_hot=True)

Returns the indices of the maximum value of all elements in the input tensor.

argmin(dim=None, keepdim=False, one_hot=True)

Returns the indices of the minimum value of all elements in the input tensor.
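
A short sketch of the one_hot flag shared by argmax and argmin (assumes a running CrypTen session; the printed values are the expected plaintext results):

import crypten
import torch

crypten.init()
x_enc = crypten.cryptensor(torch.tensor([3.0, 1.0, 2.0]))

# one_hot=True (default): the decrypted result is a one-hot encoding of the argmax
print(x_enc.argmax().get_plain_text())               # expected: tensor([1., 0., 0.])

# one_hot=False: the decrypted result is the index itself
print(x_enc.argmax(one_hot=False).get_plain_text())  # expected: tensor(0.)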

arithmetic()

Converts self._tensor to arithmetic secret sharing

bernoulli()

Returns a tensor with elements in {0, 1}. The i-th element of the output will be 1 with probability given by the i-th value of the input tensor.

binary()

Converts self._tensor to binary secret sharing

static cat(*args, **kwargs)

Forward function that stores data for autograd in result.

clone()

Create a deep copy of the input tensor.

copy_(other)

Copies value of other MPCTensor into this MPCTensor.

cos()

Computes the cosine of the input using cos(x) = Re{exp(i * x)}

Parameters

iterations (int) – for approximating exp(i * x)

cossin()

Computes cosine and sine of input via exp(i * x).

Parameters

iterations (int) – for approximating exp(i * x)

cpu()

Call torch.Tensor.cpu on the underlying share

cuda(*args, **kwargs)

Call torch.Tensor.cuda on the underlying share

property device

Return the torch.device of the underlying share

div(y)

Divides each element of self by the scalar y, or by the corresponding element of the tensor y, and returns a new resulting tensor.

For y a scalar:

\[\text{out}_i = \frac{\text{self}_i}{\text{y}}\]

For y a tensor:

\[\text{out}_i = \frac{\text{self}_i}{\text{y}_i}\]

Note that when y is a tensor, the shapes of self and y must be broadcastable.

div_(y)

In-place version of div()

dropout(p=0.5, training=True, inplace=False)

Randomly zeroes some of the elements of the input tensor with probability p.

Parameters
  • p – probability of an element to be zeroed. Default: 0.5

  • training – apply dropout if True. Default: True

  • inplace – If set to True, will do this operation in-place. Default: False

dropout2d(p=0.5, training=True, inplace=False)

Randomly zero out entire channels of the input tensor (a channel is a 2D feature map, e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 2D tensor \(\text{input}[i, j]\)). Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.

Parameters
  • p – probability of a channel to be zeroed. Default: 0.5

  • training – apply dropout if True. Default: True

  • inplace – If set to True, will do this operation in-place. Default: False

dropout3d(p=0.5, training=True, inplace=False)

Randomly zero out entire channels of the input tensor (a channel is a 3D feature map, e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 3D tensor \(\text{input}[i, j]\)). Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.

Parameters
  • p – probability of a channel to be zeroed. Default: 0.5

  • training – apply dropout if True. Default: True

  • inplace – If set to True, will do this operation in-place. Default: False

property encoder

Returns underlying encoder

eq(y, _scale=True)

Returns self == y

exp()

Approximates the exponential function using a limit approximation:

\[exp(x) = \lim_{n \rightarrow \infty} (1 + x / n) ^ n\]

Here we compute exp(x) by choosing n = 2 ** d, where d is the configured number of iterations. We then compute (1 + x / n) once and square the result d times.

Set the number of iterations for the limit approximation with config.exp_iterations.
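
A plaintext sketch of this limit approximation (the helper name exp_limit_approx is illustrative, not part of the CrypTen API; the encrypted version runs the same compute-once, square-d-times recurrence on secret shares):

import torch

def exp_limit_approx(x: torch.Tensor, iterations: int = 8) -> torch.Tensor:
    result = 1 + x / (2 ** iterations)   # (1 + x / n) with n = 2 ** d
    for _ in range(iterations):
        result = result * result         # square d times
    return result

print(exp_limit_approx(torch.tensor([1.0])))  # close to e = 2.7183...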

ge(y, _scale=True)

Returns self >= y

get_plain_text(dst=None)

Decrypts the tensor.

gt(y, _scale=True)

Returns self > y

index_add(dim, index, tensor)

Performs out-of-place index_add: Accumulate the elements of tensor into the self tensor by adding to the indices in the order given in index.

index_add_(dim, index, tensor)

Performs in-place index_add: Accumulate the elements of tensor into the self tensor by adding to the indices in the order given in index.

property is_cuda

Return True if the underlying share is stored on GPU, False otherwise

le(y, _scale=True)

Returns self <= y

log()

Approximates the natural logarithm using 8th order modified Householder iterations. This approximation is accurate within 2% relative error on [0.0001, 250].

Iterations are computed by: \(h = 1 - x \cdot e^{-y_n}\)

\[y_{n+1} = y_n - \sum_{k=1}^{order}\frac{h^k}{k}\]
Parameters
  • iterations (int) – number of Householder iterations for the approximation

  • exp_iterations (int) – number of iterations for limit approximation of exp

  • order (int) – number of polynomial terms used (order of Householder approx)
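
A plaintext sketch of the modified Householder recurrence above (illustrative only; log_householder is not a CrypTen function, and the actual implementation supplies a tuned initial estimate y and runs on secret shares):

import torch

def log_householder(x: torch.Tensor, y: torch.Tensor,
                    iterations: int = 2, order: int = 8) -> torch.Tensor:
    # y is an initial estimate of log(x); each iteration refines it
    for _ in range(iterations):
        h = 1 - x * torch.exp(-y)
        y = y - sum(h ** k / k for k in range(1, order + 1))
    return y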

log_softmax(dim, **kwargs)

Applies a softmax followed by a logarithm. While mathematically equivalent to log(softmax(x)), doing these two operations separately is slower and numerically unstable. This function uses an alternative formulation to compute the output and gradient correctly.

lt(y, _scale=True)

Returns self < y

max(dim=None, keepdim=False, one_hot=True)

Returns the maximum value of all elements in the input tensor.

max_pool2d(kernel_size, padding=None, stride=None, return_indices=False)

Applies a 2D max pooling over an input signal composed of several input planes.

min(dim=None, keepdim=False, one_hot=True)

Returns the minimum value of all elements in the input tensor.

ne(y, _scale=True)

Returns self != y

static new(*args, **kwargs)

Creates a new MPCTensor, passing all args and kwargs into the constructor.

norm(p='fro', dim=None, keepdim=False)

Computes the p-norm of the input tensor (or along a dimension).

pad(pad, mode='constant', value=0)

Pads tensor with constant.

polynomial(coeffs, func='mul')

Computes a polynomial function on a tensor with given coefficients, coeffs, that can be a list of values or a 1-D tensor.

Coefficients should be ordered from the first-order (linear) term to the highest-order term; the constant term is not included.
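
For example, under this ordering, coeffs = [1.0, 0.0, 1.0] evaluates \(x + x^3\) (a sketch assuming a running CrypTen session):

import crypten
crypten.init()

x_enc = crypten.cryptensor([1.0, 2.0])
y_enc = x_enc.polynomial([1.0, 0.0, 1.0])   # 1*x + 0*x**2 + 1*x**3
print(y_enc.get_plain_text())               # approximately tensor([ 2., 10.])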

pos_pow(p)

Approximates self ** p by computing: \(x^p = exp(p * log(x))\)

Note that this requires that the base self contain only positive values since log can only be computed on positive numbers.

Note that the value of p can be an integer, float, public tensor, or encrypted tensor.

pow(p, **kwargs)

Computes an element-wise exponent p of a tensor, where p is an integer.

pow_(p, **kwargs)

In-place version of pow()

static rand(*sizes, device=None)

Returns a tensor with elements uniformly sampled in [0, 1). The uniform random samples are generated by sampling random bits under the fixed-point encoding and converting the result to an ArithmeticSharedTensor.

static randn(*sizes, device=None)

Returns a tensor with normally distributed elements. Samples are generated using the Box-Muller transform with optimizations for numerical precision and MPC efficiency.

reciprocal()

Computes the reciprocal of the input using one of two methods:

'NR' – the Newton-Raphson method computes the reciprocal using iterations of \(x_{i+1} = 2x_i - self \cdot x_i^2\) and uses \(3 \cdot exp(-(x - 0.5)) + 0.003\) as an initial guess by default.

'log' – computes the reciprocal of the input from the observation that \(x^{-1} = exp(-log(x))\).

Configuration params:

  • reciprocal_method (str): one of 'NR' or 'log'.

  • reciprocal_nr_iters (int): number of Newton-Raphson iterations to run for the 'NR' method.

  • reciprocal_log_iters (int): number of Householder iterations to run when computing logarithms for the 'log' method.

  • reciprocal_all_pos (bool): whether all elements of the input are known to be positive, which optimizes the step of computing the sign of the input.

  • reciprocal_initial (tensor): initial value for the Newton-Raphson method. By default, this is set to \(3 \cdot exp(-(x - 0.5)) + 0.003\), which allows the method to converge over a fairly large domain.
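
A plaintext sketch of the 'NR' recurrence with the default initial guess described above (reciprocal_nr is an illustrative helper, not part of the CrypTen API; the encrypted version evaluates the same recurrence on secret shares):

import torch

def reciprocal_nr(a: torch.Tensor, iters: int = 10) -> torch.Tensor:
    x = 3 * torch.exp(-(a - 0.5)) + 0.003    # default initial guess from above
    for _ in range(iters):
        x = 2 * x - a * x * x                # x_{i+1} = 2*x_i - a*x_i**2
    return x

print(reciprocal_nr(torch.tensor([4.0])))    # close to 0.25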

relu()

Compute a Rectified Linear function on the input tensor.

reveal(dst=None)

Decrypts the tensor without any downscaling.

scatter(dim, index, src)

Out-of-place version of MPCTensor.scatter_()

scatter_(dim, index, src)

Writes all values from the tensor src into self at the indices specified in the index tensor. For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.

scatter_add(dim, index, other)

Adds all values from the tensor other into self at the indices specified in the index tensor.

scatter_add_(dim, index, other)

Adds all values from the tensor other into self at the indices specified in the index tensor.

set(enc_tensor)

Sets self encrypted to enc_tensor in place by setting shares of self to those of enc_tensor.

Parameters

enc_tensor (MPCTensor) – with encrypted shares.

shallow_copy()

Create a shallow copy of the input tensor.

property share

Returns underlying share

sigmoid()

Computes the sigmoid function using the following definition

\[\sigma(x) = (1 + e^{-x})^{-1}\]

If a valid method is given, this function will compute sigmoid using that method:

"chebyshev" - computes tanh via Chebyshev approximation with truncation and uses the identity:

\[\sigma(x) = \frac{1}{2}\tanh\left(\frac{x}{2}\right) + \frac{1}{2}\]

Parameters

terms (int) – highest degree of Chebyshev polynomials for tanh using Chebyshev approximation. Must be even and at least 6.

sign(_scale=True)

Computes the sign value of a tensor (0 is considered positive)

sin()

Computes the sine of the input using sin(x) = Im{exp(i * x)}

Parameters

iterations (int) – for approximating exp(i * x)

softmax(dim, **kwargs)

Compute the softmax of a tensor’s elements along a given dimension

sqrt()

Computes the square root of the input by raising it to the 0.5 power

static stack(*args, **kwargs)

Forward function that stores data for autograd in result.

tanh()

Computes the hyperbolic tangent function using the identity

\[tanh(x) = 2\sigma(2x) - 1\]

If a valid method is given, this function will compute tanh using that method:

“chebyshev” - computes tanh via Chebyshev approximation with truncation.

\[tanh(x) = \sum_{j=1}^{terms} c_{2j - 1} P_{2j - 1} (x / maxval)\]

where \(c_i\) is the i-th Chebyshev series coefficient and \(P_i\) is the i-th Chebyshev polynomial. The approximation is truncated to +/-1 outside [-maxval, maxval].

Parameters

terms (int) – highest degree of Chebyshev polynomials. Must be even and at least 6.
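
A short usage sketch that selects the Chebyshev approximation for a single call via ConfigManager and the sigmoid_tanh_method field of MPCConfig shown earlier (assumes a running CrypTen session; the printed values are the expected plaintext results):

import crypten
from crypten.mpc.mpc import ConfigManager

crypten.init()
x_enc = crypten.cryptensor([-1.0, 0.0, 1.0])

with ConfigManager("sigmoid_tanh_method", "chebyshev"):
    y_enc = x_enc.tanh()
print(y_enc.get_plain_text())   # approximately tensor([-0.7616, 0.0000, 0.7616])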

to(*args, **kwargs)

Depending on the input arguments, converts underlying share to the given ptype or performs torch.to on the underlying torch tensor

To convert underlying share to the given ptype, call to as:

to(ptype, **kwargs)

It will call MPCTensor.to_ptype with the arguments provided above.

Otherwise, to performs torch.to on the underlying torch tensor. See https://pytorch.org/docs/stable/tensors.html?highlight=#torch.Tensor.to for a reference of the parameters that can be passed in.

Parameters

ptype – Ptype.arithmetic or Ptype.binary.
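
A brief sketch of ptype conversion with to() (assumes a running CrypTen session):

import crypten
crypten.init()

x_enc = crypten.cryptensor([1.0, 2.0, 3.0])   # arithmetic secret-shared by default
x_bin = x_enc.to(crypten.mpc.binary)          # convert the underlying share to binary sharing
print(x_bin.ptype)                            # ptype.binary

x_arith = x_bin.to(crypten.mpc.arithmetic)    # back to arithmetic sharing
print(x_arith.ptype)                          # ptype.arithmetic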

weighted_index(dim=None)

Returns a tensor with entries that are one-hot along dimension dim. These one-hot entries are set at random with weights given by the input self.

Examples:

>>> encrypted_tensor = MPCTensor(torch.tensor([1., 6.]))
>>> index = encrypted_tensor.weighted_index().get_plain_text()
# With 1 / 7 probability
torch.tensor([1., 0.])

# With 6 / 7 probability
torch.tensor([0., 1.])

weighted_sample(dim=None)

Samples a single value across dimension dim with weights corresponding to the values in self

Returns the sample and the one-hot index of the sample.

Examples:

>>> encrypted_tensor = MPCTensor(torch.tensor([1., 6.]))
>>> index = encrypted_tensor.weighted_sample().get_plain_text()
# With 1 / 7 probability
(torch.tensor([1., 0.]), torch.tensor([1., 0.]))

# With 6 / 7 probability
(torch.tensor([0., 6.]), torch.tensor([0., 1.]))

where(condition, y)

Selects elements from self or y based on condition

Parameters
  • condition (torch.bool or MPCTensor) – when True yield self, otherwise yield y

  • y (torch.tensor or MPCTensor) – values selected at indices where condition is False.

Returns: MPCTensor or torch.tensor
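
A small sketch combining an encrypted comparison with where() (assumes a running CrypTen session; the printed value is the expected plaintext result):

import crypten
crypten.init()

x_enc = crypten.cryptensor([1.0, 2.0, 3.0])
y_enc = crypten.cryptensor([10.0, 20.0, 30.0])

cond = x_enc.gt(1.5)                # encrypted elementwise condition: [0, 1, 1]
z_enc = x_enc.where(cond, y_enc)    # take self where cond is 1, y elsewhere
print(z_enc.get_plain_text())       # expected: tensor([10., 2., 3.])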

Communicator

To execute multi-party computations locally, we provide a @mpc.run_multiprocess function decorator, which we developed to execute CrypTen code from a single script. CrypTen follows the standard MPI programming model: it runs a separate process for each party, but each process runs an identical (complete) program. Each process has a rank variable to identify itself.

For example, two-party arithmetic secret-sharing:

import crypten
import crypten.communicator as comm
import crypten.mpc as mpc

crypten.init()

@mpc.run_multiprocess(world_size=2)
def examine_arithmetic_shares():
    # create an arithmetic secret-shared tensor; each party holds one share
    x_enc = crypten.cryptensor([1, 2, 3], ptype=crypten.mpc.arithmetic)

    rank = comm.get().get_rank()
    print(f"Rank {rank}:\n {x_enc}")

x = examine_arithmetic_shares()

crypten.mpc.context.run_multiprocess(world_size)

Defines a decorator to run a function across multiple processes

Parameters

world_size (int) – number of parties / processes to initiate.

crypten.communicator.Communicator.get_world_size(self)

Returns the size of the world.

crypten.communicator.Communicator.get_rank(self)

Returns the rank of the current process.