## scipyx

SciPy is a large library used everywhere in scientific computing. That's why breaking backwards compatibility comes at a significant cost and is almost always avoided, even if the API of some methods is arguably lacking. This package provides drop-in wrappers that "fix" those methods.

npx does the same for NumPy.

If you have a fix for a SciPy method that can't go upstream for some reason, feel free to PR here.

#### Krylov methods

```python
import numpy as np
import scipy.sparse
import scipyx as spx

# create tridiagonal (-1, 2, -1) matrix
n = 100
data = -np.ones((3, n))
data[1] = 2.0
A = scipy.sparse.spdiags(data, [-1, 0, 1], n, n)
A = A.tocsr()

b = np.ones(n)

sol, info = spx.cg(A, b, tol=1.0e-10)
sol, info = spx.minres(A, b, tol=1.0e-10)
sol, info = spx.gmres(A, b, tol=1.0e-10)
sol, info = spx.bicg(A, b, tol=1.0e-10)
sol, info = spx.bicgstab(A, b, tol=1.0e-10)
sol, info = spx.cgs(A, b, tol=1.0e-10)
sol, info = spx.qmr(A, b, tol=1.0e-10)
```

`sol` is the solution of the linear system `A @ x = b` (or `None` if no
convergence), and `info` contains some useful data, e.g., `info.resnorms`. The
solution `sol` and all callback `x` have the shape of `x0`/`b`.

The methods are wrappers around SciPy's iterative solvers.
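For instance, the residual history and the intermediate iterates can be
inspected like so. This is a minimal sketch: it assumes the wrappers accept a
`callback` keyword, as SciPy's solvers do, called with the current iterate:

```python
import numpy as np
import scipy.sparse
import scipyx as spx

# same tridiagonal test matrix as above
n = 100
data = -np.ones((3, n))
data[1] = 2.0
A = scipy.sparse.spdiags(data, [-1, 0, 1], n, n).tocsr()
b = np.ones(n)

# record every iterate; each x has the shape of b
iterates = []
sol, info = spx.cg(A, b, tol=1.0e-10, callback=lambda x: iterates.append(x))

if sol is None:
    print("CG did not converge")
else:
    # info.resnorms holds the residual norm history
    print(f"{len(info.resnorms)} residual norms, final {info.resnorms[-1]:.3e}")
```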


#### Optimization

```python
import scipyx as spx


def f(x):
    return (x ** 2 - 2) ** 2


x0 = 1.5
out = spx.minimize(f, x0)
print(out.x)

x0 = -3.2
x, _ = spx.leastsq(f, x0)
print(x)
```

In scipyx, all intermediate values `x` and the result from a minimization
`out.x` will have the same shape as `x0`. (In SciPy, they always have shape
`(n,)`, no matter the input vector.)
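As a quick sketch of this shape preservation, consider a made-up objective
over a `(2, 3)`-shaped variable (the function `f` and the shape here are
purely illustrative):

```python
import numpy as np
import scipyx as spx


def f(x):
    # hypothetical objective; x keeps the shape of x0
    return np.sum((x ** 2 - 2) ** 2)


x0 = np.full((2, 3), 1.5)  # non-flat initial guess
out = spx.minimize(f, x0)
print(out.x.shape)  # (2, 3), same as x0; plain SciPy would return shape (6,)
```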


#### Root-finding

```python
import scipyx as spx


def f(x):
    return x ** 2 - 2


a, b = spx.bisect(f, 0.0, 5.0, tol=1.0e-12)
a, b = spx.regula_falsi(f, 0.0, 5.0, tol=1.0e-12)
```

scipyx provides some basic nonlinear root-finding algorithms: bisection and
regula falsi. They don't converge as fast as other methods, but they are very
robust and work with almost any function.
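As a sketch of that robustness: bisection only needs a sign change over the
initial interval, so it also copes with functions that trouble
derivative-based methods. The function `g` below, with infinite slope at its
root, is a made-up example:

```python
import scipyx as spx


def g(x):
    # non-smooth test function with a sign change (and infinite slope) at x = 1
    return (x - 1.0) ** 0.5 if x > 1.0 else -((1.0 - x) ** 0.5)


a, b = spx.bisect(g, -10.0, 100.0, tol=1.0e-12)
print(0.5 * (a + b))  # ~1.0
```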