
Can anyone direct me to the section of the NumPy manual where I can find functions to accomplish root-mean-square (RMS) calculations? (I know this can be accomplished using np.mean and np.abs; isn't there a built-in? If not, why? Just curious, no offense.)

Can anyone also explain the complications of matrices versus arrays (just in the following case)?

U is a matrix (T-by-N, or say T cross N), Ue is another matrix (T-by-N), and I define k as a NumPy array

U[ind,:] is still a matrix, even when I wrap it in an array in the following fashion:

k = np.array(U[ind,:])

When I print k, or type k in IPython, it displays the following:

k = array([[ 2., .3, ..., 9. ]])

You see the double square brackets (which make it multi-dimensional, I guess), giving it shape (1, N).
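What the question describes is easy to reproduce; here is a minimal sketch (the size N and the row index are made up for illustration) showing that slicing an np.matrix always yields a 2-D result, even after conversion to an ndarray:

```python
import numpy as np

N = 5
U = np.matrix(np.arange(N * N).reshape(N, N))  # stand-in for the T-by-N matrix

row = U[1, :]        # slicing a matrix returns a matrix
print(row.shape)     # (1, 5): still two-dimensional

k = np.array(row)    # converting to ndarray keeps the extra axis
print(k.shape)       # (1, 5), not (5,)
```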

But I can't assign it to an array defined this way:

l = np.zeros(N)   # shape (N,)

l[:] = k[:]

error: matrix dimensions incompatible

Is there a way to accomplish the vector assignment I intend to do? Please don't tell me to do l = k; that defeats the purpose (I get different errors in the program, and I know the reasons; if needed I can attach the piece of code).

Writing a loop is the dumb way, which I'm using for the time being.

I hope I was able to explain the problems I'm facing.

regards ...

Joe Kington
fedvasu
    In the future, please do not combine two questions together in the same post. It will make it easier for people to respond and for future users of the site to find things. – JoshAdel Apr 10 '11 at 17:34
    If you inspect the shape attribute of various arrays (e.g. `K.shape` or `l[:].shape`) you will see whether the dimensions of the arrays are incompatible, and it will give you clues about how you might correct the issue. – JoshAdel Apr 10 '11 at 17:37
    Quite a verbose question, indeed. As pointed out already, if you have two questions, ask two questions. Anyway, just show your actual code and there's a good chance that you'll get constructive suggestions. Your current way of asking the (simple) questions makes them quite cumbersome. Thanks – eat Apr 10 '11 at 17:44
    Could you please clarify exactly what type of RMS calculation you want to do (either by citing an equation or linking to the definition that you are using)? – JoshAdel Apr 10 '11 at 18:16
  • @JoshAdel Thanks for your comments. From next time onwards I'll post one question at a time. As for exactly what type of RMS: I want to take a row of a matrix, subtract an array from it, and average the resulting sequence in the mean-squared sense. I know my question is verbose; I thought that would help express it, but it rather proved the opposite! – fedvasu Apr 12 '11 at 09:58

7 Answers


For the RMS, I think this is the clearest:

from numpy import mean, sqrt, square, arange
a = arange(10) # For example
rms = sqrt(mean(square(a)))

The code reads like you say it: "root-mean-square".

deprecated

For rms, the fastest expression I have found for small x.size (~ 1024) and real x is:

import numpy as np

def rms(x):
    return np.sqrt(x.dot(x) / x.size)

This seems to be around twice as fast as the linalg.norm version (ipython %timeit on a really old laptop).
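The comparison can be reproduced with timeit; here is a rough sketch (array size and repeat count are arbitrary) that also checks that the two expressions agree numerically, since the actual speed ratio depends on the machine and the BLAS build:

```python
import timeit
import numpy as np

x = np.random.rand(1024)

def rms_dot(x):
    return np.sqrt(x.dot(x) / x.size)

def rms_norm(x):
    return np.linalg.norm(x) / np.sqrt(x.size)

# Both compute the same value
assert np.isclose(rms_dot(x), rms_norm(x))

t_dot = timeit.timeit(lambda: rms_dot(x), number=10_000)
t_norm = timeit.timeit(lambda: rms_norm(x), number=10_000)
print(f"dot: {t_dot:.3f}s  norm: {t_norm:.3f}s")
```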

If you want complex arrays handled more appropriately, then this also works:

def rms(x):
    return np.sqrt(np.vdot(x, x)/x.size)

However, this version is nearly as slow as the norm version and only works for flat arrays.

goodboy

I don't know why it's not built in. I like

from numpy import sqrt, mean

def rms(x, axis=None):
    return sqrt(mean(x**2, axis=axis))

If you have NaNs in your data, you can do

from numpy import sqrt, nanmean

def nanrms(x, axis=None):
    return sqrt(nanmean(x**2, axis=axis))
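A quick sanity check of the NaN-aware version, with values chosen so the answer is easy to verify by hand:

```python
import numpy as np

x = np.array([3.0, 4.0, np.nan])
val = np.sqrt(np.nanmean(x**2))  # NaN is ignored: sqrt((9 + 16) / 2)
print(val)                       # ~3.5355
```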
Ben
    I often work with complex data, in that case square isn't enough. You need something like `abs(x)**2` instead of just `x**2` – Eric C. Dec 18 '14 at 16:16

For the RMS, how about

norm(V)/sqrt(V.size)   # norm from np.linalg, sqrt from np
Jérôme Verstrynge
Xingzhong
    upvote because it is concise but norm is from `np.linalg` instead of directly from `np` and it does not have an optional `axis` argument, useful to make the rms function more general – dashesy Sep 04 '14 at 17:22
    @dashesy [`np.linalg.norm`](https://numpy.org/doc/stable/reference/generated/numpy.linalg.norm.html) does have an `axis` parameter since version 1.8.0. – a_guest Jul 21 '20 at 13:44

Try this:

import numpy as np

N = 10               # example size
U = np.zeros((N, N))
ind = 1
k = np.zeros(N)
k[:] = U[ind, :]     # the row is broadcast into the 1-D array
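If the goal is simply a 1-D copy of the row rather than filling a preallocated array, flattening the matrix row directly also works; a short sketch (assuming U is an np.matrix, as in the question):

```python
import numpy as np

N = 4
U = np.matrix(np.eye(N))
ind = 1

k = np.asarray(U[ind, :]).ravel()  # plain ndarray of shape (N,)
k2 = U[ind, :].A1                  # np.matrix's .A1 gives the same flattened array

print(k.shape, k2.shape)           # (4,) (4,)
```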
highBandWidth

I use this for RMS, all in NumPy, and give it an optional axis argument like other NumPy functions:

import numpy as np   
rms = lambda V, axis=None: np.sqrt(np.mean(np.square(V), axis))
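The axis argument lets it work per row or per column of a 2-D array, for example:

```python
import numpy as np

rms = lambda V, axis=None: np.sqrt(np.mean(np.square(V), axis))

A = np.array([[3.0, 4.0],
              [6.0, 8.0]])
print(rms(A))          # RMS over all elements: sqrt(125/4)
print(rms(A, axis=1))  # per-row: [sqrt(12.5), sqrt(50)]
```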
dashesy

If you have complex vectors and are using PyTorch, the vector norm is the fastest approach on both CPU and GPU:

import torch
batch_size, length = 512, 4096
batch = torch.randn(batch_size, length, dtype=torch.complex64)
scale = 1 / torch.sqrt(torch.tensor(float(length)))  # cast: torch.sqrt needs a float tensor
rms_power = batch.norm(p=2, dim=-1, keepdim=True)
batch_rms = batch / (rms_power * scale)

Using a batched vdot like goodboy's approach is about 60% slower than the above. Using a naïve method similar to deprecated's approach is about 85% slower.

Teque5