cholesky function
keras.ops.cholesky(x)
Computes the Cholesky decomposition of a positive semi-definite matrix.
Arguments
- x: Input tensor of shape (..., M, M).
Returns
A tensor of shape (..., M, M) representing the lower triangular Cholesky factor of x.
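Example (an illustrative sketch, assuming only the signature above; expected values are given as comments since exact printed output varies by backend):
>>> import keras
>>> x = keras.ops.convert_to_tensor([[4., 2.], [2., 5.]])
>>> L = keras.ops.cholesky(x)  # lower triangular factor; expect [[2., 0.], [1., 2.]]
>>> keras.ops.matmul(L, keras.ops.transpose(L))  # recovers x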
det function
keras.ops.det(x)
Computes the determinant of a square tensor.
Arguments
- x: Input tensor of shape (..., M, M).
Returns
A tensor of shape (...,) representing the determinant of x.
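Example (illustrative sketch; the expected value is noted as a comment since exact printed output varies by backend):
>>> import keras
>>> x = keras.ops.convert_to_tensor([[1., 2.], [3., 4.]])
>>> keras.ops.det(x)  # expect -2.0 (1*4 - 2*3)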
eig function
keras.ops.eig(x)
Computes the eigenvalues and eigenvectors of a square matrix.
Arguments
- x: Input tensor of shape (..., M, M).
Returns
A tuple of two tensors: a tensor of shape (..., M) containing the eigenvalues and a tensor of shape (..., M, M) containing the eigenvectors.
eigh function
keras.ops.eigh(x)
Computes the eigenvalues and eigenvectors of a complex Hermitian matrix.
Arguments
- x: Input tensor of shape (..., M, M).
Returns
A tuple of two tensors: a tensor of shape (..., M) containing the eigenvalues and a tensor of shape (..., M, M) containing the eigenvectors.
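Example (illustrative sketch; the expected values are noted as a comment since exact printed output varies by backend):
>>> import keras
>>> x = keras.ops.convert_to_tensor([[2., 1.], [1., 2.]])  # symmetric (Hermitian) input
>>> w, v = keras.ops.eigh(x)  # expect eigenvalues [1., 3.] in ascending order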
inv function
keras.ops.inv(x)
Computes the inverse of a square tensor.
Arguments
- x: Input tensor of shape (..., M, M).
Returns
A tensor of shape (..., M, M) representing the inverse of x.
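Example (illustrative sketch; the expected value is noted as a comment since exact printed output varies by backend):
>>> import keras
>>> x = keras.ops.convert_to_tensor([[1., 2.], [3., 4.]])
>>> keras.ops.inv(x)  # expect [[-2., 1.], [1.5, -0.5]]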
logdet function
keras.ops.logdet(x)
Computes the natural logarithm of the determinant of a Hermitian positive-definite matrix.
Arguments
- x: Input matrix. It must be Hermitian positive definite.
Returns
The natural log of the determinant of x.
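Example (illustrative sketch; the expected value is noted as a comment since exact printed output varies by backend):
>>> import keras
>>> x = keras.ops.convert_to_tensor([[2., 0.], [0., 3.]])
>>> keras.ops.logdet(x)  # determinant is 6, so expect log(6), approximately 1.7918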
lstsq function
keras.ops.lstsq(a, b, rcond=None)
Return the least-squares solution to a linear matrix equation.
Computes the vector x that approximately solves the equation a @ x = b. The equation may be under-, well-, or over-determined (i.e., the number of linearly independent rows of a can be less than, equal to, or greater than its number of linearly independent columns). If a is square and of full rank, then x (but for round-off error) is the exact solution of the equation. Otherwise, x minimizes the L2 norm of b - a @ x.
If there are multiple minimizing solutions, the one with the smallest L2 norm is returned.
Arguments
- a: "Coefficient" matrix of shape (M, N).
- b: Ordinate or "dependent variable" values, of shape (M,) or (M, K). If b is two-dimensional, the least-squares solution is calculated for each of the K columns of b.
- rcond: Cut-off ratio for small singular values of a. For the purposes of rank determination, singular values are treated as zero if they are smaller than rcond times the largest singular value of a.
Returns
Tensor with shape (N,) or (N, K) containing the least-squares solutions.
NOTE: The output differs from numpy.linalg.lstsq. NumPy returns a tuple with four elements, the first being the least-squares solutions and the others being essentially never used. Keras only returns the first value. This is done both to ensure consistency across backends (which cannot be achieved for the other values) and to simplify the API.
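Example (illustrative sketch; the expected solution is noted as a comment since exact printed output varies by backend):
>>> import keras
>>> a = keras.ops.convert_to_tensor([[1., 0.], [0., 1.], [1., 1.]])
>>> b = keras.ops.convert_to_tensor([1., 2., 3.])
>>> keras.ops.lstsq(a, b)  # least-squares solution of shape (2,); expect [1., 2.]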
lu_factor function
keras.ops.lu_factor(x)
Computes the lower-upper decomposition of a square matrix.
Arguments
- x: Input tensor of shape (..., M, M).
Returns
A tuple of two tensors: a tensor of shape (..., M, M) containing the lower and upper triangular matrices and a tensor of shape (..., M) containing the pivots.
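Example (illustrative sketch, assuming only the signature above):
>>> import keras
>>> x = keras.ops.convert_to_tensor([[4., 3.], [6., 3.]])
>>> lu, pivots = keras.ops.lu_factor(x)  # lu: combined L/U factors of shape (2, 2), pivots: shape (2,)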
norm function
keras.ops.norm(x, ord=None, axis=None, keepdims=False)
Matrix or vector norm.
This function is able to return one of eight different matrix norms, or one of an infinite number of vector norms (described below), depending on the value of the ord parameter.
Arguments
- x: Input tensor.
- ord: Order of the norm (see the list below). The default is None.
- axis: If axis is an integer, it specifies the axis of x along which to compute the vector norms. If axis is a 2-tuple, it specifies the axes that hold 2-D matrices, and the matrix norms of these matrices are computed.
- keepdims: If True, the axes which are reduced are left in the result as dimensions with size one.
Note: For values of ord < 1, the result is, strictly speaking, not a mathematical 'norm', but it may still be useful for various numerical purposes. The following norms can be calculated:
- For matrices:
  - ord=None: Frobenius norm
  - ord="fro": Frobenius norm
  - ord="nuc": nuclear norm
  - ord=np.inf: max(sum(abs(x), axis=1))
  - ord=-np.inf: min(sum(abs(x), axis=1))
  - ord=0: not supported
  - ord=1: max(sum(abs(x), axis=0))
  - ord=-1: min(sum(abs(x), axis=0))
  - ord=2: 2-norm (largest singular value)
  - ord=-2: smallest singular value
  - other: not supported
- For vectors:
  - ord=None: 2-norm
  - ord="fro": not supported
  - ord="nuc": not supported
  - ord=np.inf: max(abs(x))
  - ord=-np.inf: min(abs(x))
  - ord=0: sum(x != 0)
  - ord=1: as below
  - ord=-1: as below
  - ord=2: as below
  - ord=-2: as below
  - other: sum(abs(x)**ord)**(1./ord)
Returns
Norm of the matrix or vector(s).
Example
>>> x = keras.ops.reshape(keras.ops.arange(9, dtype="float32") - 4, (3, 3))
>>> keras.ops.linalg.norm(x)
7.7459664
qr function
keras.ops.qr(x, mode="reduced")
Computes the QR decomposition of a tensor.
Arguments
- x: Input tensor of shape (..., M, N).
- mode: A string specifying the mode of the QR decomposition; "reduced" (the default) computes the reduced decomposition and "complete" computes the complete decomposition.
Returns
A tuple containing two tensors. The first tensor of shape (..., M, K) is the orthogonal matrix q and the second tensor of shape (..., K, N) is the upper triangular matrix r, where K = min(M, N).
Example
>>> x = keras.ops.convert_to_tensor([[1., 2.], [3., 4.], [5., 6.]])
>>> q, r = qr(x)
>>> print(q)
array([[-0.16903079  0.897085  ]
       [-0.5070925   0.2760267 ]
       [-0.8451542  -0.34503305]], shape=(3, 2), dtype=float32)
solve function
keras.ops.solve(a, b)
Solves a linear system of equations given by a x = b.
Arguments
- a: A tensor of shape (..., M, M) representing the coefficients matrix.
- b: A tensor of shape (..., M) or (..., M, N) representing the right-hand side or "dependent variable" matrix.
Returns
A tensor of shape (..., M) or (..., M, N) representing the solution of the linear system. Returned shape is identical to b.
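Example (illustrative sketch; the expected solution is noted as a comment since exact printed output varies by backend):
>>> import keras
>>> a = keras.ops.convert_to_tensor([[3., 1.], [1., 2.]])
>>> b = keras.ops.convert_to_tensor([9., 8.])
>>> keras.ops.solve(a, b)  # expect [2., 3.] since 3*2 + 1*3 = 9 and 1*2 + 2*3 = 8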
solve_triangular function
keras.ops.solve_triangular(a, b, lower=False)
Solves a linear system of equations given by a x = b, where a is a triangular matrix.
Arguments
- a: A tensor of shape (..., M, M) representing the triangular coefficients matrix.
- b: A tensor of shape (..., M) or (..., M, N) representing the right-hand side or "dependent variable" matrix.
- lower: A boolean specifying whether a is lower triangular (True) or upper triangular (False). Defaults to False.
Returns
A tensor of shape (..., M) or (..., M, N) representing the solution of the linear system. Returned shape is identical to b.
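Example (illustrative sketch; the expected solution is noted as a comment since exact printed output varies by backend):
>>> import keras
>>> a = keras.ops.convert_to_tensor([[2., 0.], [1., 3.]])  # lower triangular
>>> b = keras.ops.convert_to_tensor([4., 11.])
>>> keras.ops.solve_triangular(a, b, lower=True)  # expect [2., 3.]: 2*2 = 4 and 1*2 + 3*3 = 11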
svd function
keras.ops.svd(x, full_matrices=True, compute_uv=True)
Computes the singular value decomposition of a matrix.
Arguments
- x: Input tensor of shape (..., M, N).
Returns
A tuple of three tensors: a tensor of shape (..., M, M) containing the left singular vectors, a tensor of shape (..., min(M, N)) containing the singular values and a tensor of shape (..., N, N) containing the right singular vectors.
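Example (illustrative sketch, assuming only the signature above and the default full_matrices=True):
>>> import keras
>>> x = keras.ops.convert_to_tensor([[1., 2.], [3., 4.], [5., 6.]])
>>> u, s, vh = keras.ops.svd(x)  # shapes: u (3, 3), s (2,), vh (2, 2)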