For large matrices, yes, computing the inverse is very inefficient. However, special properties, such as the matrix being lower triangular, make the inverse much simpler to compute.
In numerical analysis, the most typical approach to Ax = b is an LU factorization of A (so LUx = b), then solving Ly = b for y by forward substitution and Ux = y for x by back substitution.
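A minimal sketch of the factor-then-solve workflow, assuming SciPy is available (the 3x3 system here is just made up for illustration):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Hypothetical system for illustration
A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([24.0, 30.0, -24.0])

# Factor A once (PA = LU); the factors can be reused for many right-hand sides
lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)  # forward solve Ly = Pb, then back solve Ux = y

print(np.allclose(A @ x, b))  # True
```

Reusing the factorization is the main payoff: factoring costs O(n^3), but each additional right-hand side only costs O(n^2).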
For a more numerically stable approach, consider QR decomposition: Q has the special property Q^T Q = I, so the system reduces to Rx = Q^T b, where R is upper triangular (only one back solve, as opposed to a forward and a back solve with LU).
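The same idea in code, assuming NumPy/SciPy and an arbitrary small example matrix:

```python
import numpy as np
from scipy.linalg import solve_triangular

# Hypothetical system for illustration
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

Q, R = np.linalg.qr(A)            # A = QR, Q orthogonal, R upper triangular
x = solve_triangular(R, Q.T @ b)  # single back substitution: Rx = Q^T b

print(np.allclose(A @ x, b))  # True
```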
Other special properties, such as the matrix being symmetric positive definite (Cholesky) or banded (banded Gaussian elimination), make certain solvers better suited than others.
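For the symmetric positive definite case, a Cholesky-based solve might look like this (SciPy assumed; the matrix is again just an illustration):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Hypothetical symmetric positive definite matrix
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

c, low = cho_factor(A)      # A = LL^T; roughly half the flops of LU
x = cho_solve((c, low), b)

print(np.allclose(A @ x, b))  # True
```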
As always, be aware of floating-point errors in your calculations.
Might I add that iterative solvers are also a popular way to solve linear systems. The conjugate gradient method is the most widely used and works well for sparse, symmetric positive definite matrices. Jacobi and Gauss-Seidel are good for matrices that are diagonally dominant and sparse.
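As a sketch of the iterative approach, assuming SciPy's sparse module, here is conjugate gradient on a sparse tridiagonal (1-D Poisson-like) system chosen purely for illustration:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Sparse, symmetric positive definite tridiagonal system (1-D Poisson-like)
n = 100
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)  # info == 0 means the iteration converged

print(info)  # 0
```

Note that CG never factors A; it only needs matrix-vector products, which is why it scales so well for large sparse systems.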