# MPCTensor

An MPCTensor is a CrypTensor encrypted using the secure MPC protocol. To support the mathematical operations required by the MPCTensor, CrypTen implements two kinds of secret-sharing protocols, selected via ptype:

• crypten.mpc.arithmetic for arithmetic secret-sharing

• crypten.mpc.binary for binary secret-sharing

Arithmetic secret-sharing forms the basis for most of the mathematical operations implemented by MPCTensor, while binary secret-sharing allows for the evaluation of logical expressions.

We can use the ptype attribute to create a CrypTensor with the appropriate secret-sharing protocol. For example:

# arithmetic secret-shared tensors
x_enc = crypten.cryptensor([1.0, 2.0, 3.0], ptype=crypten.mpc.arithmetic)
print("x_enc internal type:", x_enc.ptype)

# binary secret-shared tensors
y_enc = crypten.cryptensor([1, 2, 1], ptype=crypten.mpc.binary)
print("y_enc internal type:", y_enc.ptype)


We also provide helpers to execute secure multi-party computations in separate processes (see Communicator).

For technical details, see Damgård et al. (2012) and Beaver (1991), which outline the Beaver triple protocol used in our implementation.
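To make the role of Beaver triples concrete, the sketch below multiplies two additively secret-shared integers using a triple (a, b, c) with c = a * b. It uses plain Python integers rather than CrypTen's internal share types, so treat it as an illustration of the protocol rather than CrypTen's implementation:

import random

Q = 2**64  # modulus for additive secret sharing (illustrative choice)

def share(value):
    # Split a value into two additive shares modulo Q
    r = random.randrange(Q)
    return r, (value - r) % Q

# Beaver triple: random a, b and c = a * b, secret-shared ahead of time
a, b = random.randrange(Q), random.randrange(Q)
a0, a1 = share(a)
b0, b1 = share(b)
c0, c1 = share((a * b) % Q)

# Secret-shared inputs x and y
x, y = 7, 5
x0, x1 = share(x)
y0, y1 = share(y)

# The parties open the masked values eps = x - a and delta = y - b;
# a and b are uniformly random, so eps and delta leak nothing about x, y
eps = (x0 - a0 + x1 - a1) % Q
delta = (y0 - b0 + y1 - b1) % Q

# Each party computes its share of x * y locally; only one party
# adds the public eps * delta term
z0 = (c0 + eps * b0 + delta * a0 + eps * delta) % Q
z1 = (c1 + eps * b1 + delta * a1) % Q
assert (z0 + z1) % Q == (x * y) % Q  # reconstruction yields x * y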

For examples illustrating arithmetic and binary secret-sharing in CrypTen, the ptype attribute, and the execution of secure multi-party computations, please see Tutorial 2.

## Tensor Operations

class crypten.mpc.mpc.MPCTensor(input, ptype=<ptype.arithmetic: 0>, *args, **kwargs)
abs()

Computes the absolute value of a tensor

argmax(dim=None, keepdim=False, one_hot=False)

Returns the indices of the maximum value of all elements in the input tensor.

argmin(dim=None, keepdim=False, one_hot=False)

Returns the indices of the minimum value of all elements in the input tensor.

arithmetic()

Converts self._tensor to arithmetic secret sharing

bernoulli()

Returns a tensor with elements in {0, 1}. The i-th element of the output will be 1 with probability given by the i-th value of the input tensor.

binary()

Converts self._tensor to binary secret sharing

cos(iterations=10)

Computes the cosine of the input using cos(x) = Re{exp(i * x)}

Parameters

iterations (int) – number of iterations to use when approximating exp(i * x)

cossin(iterations=10)

Computes cosine and sine of input via exp(i * x).

Parameters

iterations (int) – number of iterations to use when approximating exp(i * x)

div(y)

Divides each element of self by the scalar y, or element-wise by the tensor y, and returns a new resulting tensor.

For y a scalar:

$\text{out}_i = \frac{\text{self}_i}{\text{y}}$

For y a tensor:

$\text{out}_i = \frac{\text{self}_i}{\text{y}_i}$

Note that when y is a tensor, the shapes of self and y must be broadcastable.

div_(y)

In-place version of div()

dropout(p=0.5, training=True, inplace=False)

Randomly zeroes some of the elements of the input tensor with probability p.

Parameters
• p – probability of an element to be zeroed. Default: 0.5

• training – apply dropout if True. Default: True

• inplace – If set to True, will do this operation in-place. Default: False

dropout2d(p=0.5, training=True, inplace=False)

Randomly zero out entire channels (a channel is a 2D feature map, e.g., the $j$-th channel of the $i$-th sample in the batched input is a 2D tensor $\text{input}[i, j]$) of the input tensor. Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.

Parameters
• p – probability of a channel to be zeroed. Default: 0.5

• training – apply dropout if True. Default: True

• inplace – If set to True, will do this operation in-place. Default: False

dropout3d(p=0.5, training=True, inplace=False)

Randomly zero out entire channels (a channel is a 3D feature map, e.g., the $j$-th channel of the $i$-th sample in the batched input is a 3D tensor $\text{input}[i, j]$) of the input tensor. Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.

Parameters
• p – probability of a channel to be zeroed. Default: 0.5

• training – apply dropout if True. Default: True

• inplace – If set to True, will do this operation in-place. Default: False

eq(y)

Returns self == y

exp(iterations=8)

Approximates the exponential function using a limit approximation:

$exp(x) = \lim_{n \rightarrow \infty} (1 + x / n) ^ n$

Here we compute exp by choosing n = 2 ** d for some large d equal to iterations. We then compute (1 + x / n) once and square it d times.

Parameters

iterations (int) – number of iterations for limit approximation
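A plaintext sketch of this scheme (not the encrypted implementation) makes the repeated-squaring structure explicit:

import math

def exp_approx(x, iterations=8):
    # Approximate exp(x) as (1 + x / 2**d) ** (2**d) by squaring d times
    result = 1 + x / 2**iterations
    for _ in range(iterations):
        result = result * result
    return result

print(exp_approx(1.0), math.exp(1.0))  # approx. 2.7130 vs. 2.71828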

ge(y)

Returns self >= y

get_plain_text()

Decrypts the tensor

gt(y)

Returns self > y

index_add(dim, index, tensor)

Performs out-of-place index_add: accumulates the elements of tensor into self by adding to the indices in the order given in index.

index_add_(dim, index, tensor)

Performs in-place index_add: accumulates the elements of tensor into self by adding to the indices in the order given in index.

le(y)

Returns self <= y

log(iterations=2, exp_iterations=8, order=8)

Approximates the natural logarithm using 8th order modified Householder iterations. This approximation is accurate within 2% relative error on [0.0001, 250].

Iterations are computed by:

$h = 1 - x \cdot \exp(-y_n)$

$y_{n+1} = y_n - \sum_{k=1}^{\text{order}} \frac{h^k}{k}$

Parameters
• iterations (int) – number of Householder iterations for the approximation

• exp_iterations (int) – number of iterations for limit approximation of exp

• order (int) – number of polynomial terms used (order of Householder approx)
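The plaintext sketch below shows the shape of the iteration; the initial guess is illustrative and not necessarily the one CrypTen uses internally:

import math

def log_approx(x, iterations=2, order=8):
    # Initial guess (illustrative); the encrypted version uses a fixed
    # public approximation of similar form
    y = x / 120 - 20 * math.exp(-2 * x - 1.0) + 3.0
    for _ in range(iterations):
        h = 1 - x * math.exp(-y)  # in CrypTen, exp is itself approximated
        y -= sum(h**k / k for k in range(1, order + 1))
    return y

print(log_approx(10.0), math.log(10.0))  # both approx. 2.3026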

log_softmax(dim, **kwargs)

Applies a softmax followed by a logarithm. While mathematically equivalent to log(softmax(x)), performing these two operations separately is slower and numerically unstable. This function uses an alternative formulation to compute the output and gradient correctly.

lt(y)

Returns self < y

max(dim=None, keepdim=False, one_hot=False)

Returns the maximum value of all elements in the input tensor.

max_pool2d(kernel_size, padding=None, stride=None, return_indices=False)

Applies a 2D max pooling over an input signal composed of several input planes.

min(dim=None, keepdim=False, one_hot=False)

Returns the minimum value of all elements in the input tensor.

ne(y)

Returns self != y

static new(*args, **kwargs)

Creates a new MPCTensor, passing all args and kwargs into the constructor.

norm(p='fro', dim=None, keepdim=False)

Computes the p-norm of the input tensor (or along a dimension).

pad(pad, mode='constant', value=0)

Pads tensor with constant.

polynomial(coeffs, func='mul')

Computes a polynomial function on a tensor with given coefficients, coeffs, that can be a list of values or a 1-D tensor.

Coefficients should be ordered from the first-order (linear) term to the highest-order term; the constant term is not included.
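For example, with coefficients [2, 3, 1] the call below evaluates $2x + 3x^2 + x^3$ element-wise (a minimal sketch, assuming an initialized CrypTen session):

x_enc = crypten.cryptensor([1.0, 2.0, 3.0])
y_enc = x_enc.polynomial([2, 3, 1])  # 2x + 3x^2 + x^3
print(y_enc.get_plain_text())  # tensor([ 6., 24., 60.])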

pos_pow(p)

Approximates self ** p by computing: $x^p = \exp(p \cdot \log(x))$

Note that this requires the base self to contain only positive values, since log can only be computed on positive numbers. The exponent p may be an integer, float, public tensor, or encrypted tensor.

pow(p, **kwargs)

Raises each element of the tensor to the power p, where p is an integer.

pow_(p, **kwargs)

In-place version of pow()

reciprocal(method='NR', nr_iters=10, log_iters=1, all_pos=False)

'NR'

Newton-Raphson method computes the reciprocal using iterations of $x_{i+1} = 2x_i - \text{self} \cdot x_i^2$ and uses $3 \exp(-(x - 0.5)) + 0.003$ as an initial guess

'log'

Computes the reciprocal of the input from the observation that: $x^{-1} = \exp(-\log(x))$

Parameters
• nr_iters (int) – determines the number of Newton-Raphson iterations to run for the NR method

• log_iters (int) – determines the number of Householder iterations to run when computing logarithms for the log method

• all_pos (bool) – determines whether all elements of the input are known to be positive, which optimizes the step of computing the sign of the input.
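A plaintext sketch of the 'NR' method, using the initial guess quoted above:

import math

def reciprocal_nr(x, nr_iters=10):
    # Newton-Raphson iteration for 1/x, assuming x > 0
    y = 3 * math.exp(-(x - 0.5)) + 0.003  # initial guess from the docstring
    for _ in range(nr_iters):
        y = 2 * y - x * y * y  # y_{i+1} = 2*y_i - x * y_i^2
    return y

print(reciprocal_nr(4.0), 1 / 4.0)  # both approx. 0.25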

relu()

Computes the Rectified Linear Unit (ReLU) function on the input tensor.

scatter(dim, index, src)

Out-of-place version of MPCTensor.scatter_()

scatter_(dim, index, src)

Writes all values from the tensor src into self at the indices specified in the index tensor. For each value in src, its output index is specified by its index in src for dimension != dim and by the corresponding value in index for dimension = dim.

scatter_add(dim, index, other)

Out-of-place version of MPCTensor.scatter_add_()

scatter_add_(dim, index, other)

Adds all values from the tensor other into self at the indices specified in the index tensor.

set(enc_tensor)

Sets self to the value of enc_tensor in place, by setting the shares of self to those of enc_tensor.

Parameters

enc_tensor (MPCTensor) – tensor whose encrypted shares will be copied into self.

shallow_copy()

Creates a shallow copy of the input tensor

property share

Returns underlying _tensor

sigmoid(reciprocal_method='NR')

Computes the sigmoid function on the input value:

$\text{sigmoid}(x) = (1 + e^{-x})^{-1}$

For numerical stability, we compute this as:

$\text{sigmoid}(x) = (\text{sigmoid}(|x|) - 0.5) \cdot \text{sign}(x) + 0.5$

sign(scale=True)

Computes the sign value of a tensor (0 is considered positive)

sin(iterations=10)

Computes the sine of the input using sin(x) = Im{exp(i * x)}

Parameters

iterations (int) – number of iterations to use when approximating exp(i * x)

softmax(dim, **kwargs)

Compute the softmax of a tensor’s elements along a given dimension

sqrt()

Computes the square root of the input by raising it to the 0.5 power

tanh(reciprocal_method='NR')

Computes tanh from the sigmoid function: tanh(x) = 2 * sigmoid(2 * x) - 1

to(ptype, **kwargs)

Converts self._tensor to the given ptype

Parameters

ptype – Ptype.arithmetic or Ptype.binary.
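For example (a minimal sketch, assuming an initialized CrypTen session):

x_enc = crypten.cryptensor([1.0, 2.0, 3.0])  # arithmetic sharing by default
y_enc = x_enc.to(crypten.mpc.binary)         # convert to binary sharing
z_enc = y_enc.to(crypten.mpc.arithmetic)     # and back
print(z_enc.get_plain_text())                # tensor([1., 2., 3.])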

where(condition, y)

Selects elements from self or y based on condition

Parameters
• condition (torch.bool or MPCTensor) – when True yield self, otherwise yield y

• y (torch.Tensor or MPCTensor) – values selected at indices where condition is False.

Returns: MPCTensor or torch.Tensor
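For example (a minimal sketch, assuming an initialized CrypTen session):

x_enc = crypten.cryptensor([1.0, 2.0, 3.0])
y_enc = crypten.cryptensor([4.0, 5.0, 6.0])
cond = x_enc.gt(1.5)              # encrypted condition
z_enc = x_enc.where(cond, y_enc)  # self where cond is 1, else y
print(z_enc.get_plain_text())     # tensor([4., 2., 3.])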

## Communicator

To execute multi-party computations locally, we provide a @mpc.run_multiprocess function decorator, which we developed to execute CrypTen code from a single script. CrypTen follows the standard MPI programming model: it runs a separate process for each party, but each process runs an identical (complete) program. Each process has a rank variable to identify itself.

For example, two-party arithmetic secret-sharing:

import crypten
import crypten.communicator as comm
import crypten.mpc as mpc

@mpc.run_multiprocess(world_size=2)
def examine_arithmetic_shares():
    x_enc = crypten.cryptensor([1, 2, 3], ptype=crypten.mpc.arithmetic)

    rank = comm.get().get_rank()
    print(f"Rank {rank}:\n {x_enc}")

x = examine_arithmetic_shares()

crypten.mpc.context.run_multiprocess(world_size)

Defines a decorator that runs the wrapped function across multiple processes

Parameters

world_size (int) – number of parties / processes to initiate.

crypten.communicator.Communicator.get_world_size(self)

Returns the size of the world.

crypten.communicator.Communicator.get_rank(self)

Returns the rank of the current process.
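These accessors are typically called inside a function decorated with @mpc.run_multiprocess, as in the sketch below (same imports as the example above):

@mpc.run_multiprocess(world_size=2)
def report():
    rank = comm.get().get_rank()
    world_size = comm.get().get_world_size()
    print(f"Process {rank} of {world_size}")

report()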