
It's documented that separate Redis instances map to separate CPU cores. If I have 8 Redis instances running on a Debian/Ubuntu machine with 8 cores, each of them would map to its own core.

1) What happens if I scale this machine down to 4 cores?

2) Do the changes happen automatically (by default), or is some explicit configuration involved?

3) Is there any way to control the behavior? If so, to what extent?

Would love to understand the technicals behind this; an illustrative example is most welcome. I run an app hosted in the cloud which uses Redis as a back-end. Scaling the machine's CPU cores up (and down) is one of the things I have to do, and I'd like to know what I'm getting into first.

Thanks in advance!

Hassan Baig
    Where do you see this documented? I don't see any evidence that it's true. – hobbs Feb 02 '19 at 11:23
  • @hobbs: for example, it's heavily implied in this question and some of its answers (which I can separately point out if you want): https://stackoverflow.com/questions/16221563/whats-the-point-of-multiple-redis-databases – Hassan Baig Feb 02 '19 at 13:33

2 Answers


There is no magic. Since redis is single-threaded, a single instance of redis will only occupy a single core at a time. Running multiple instances creates the possibility that more than one of them will be executing at once, on different cores (if you have them). How this is done is left entirely up to the operating system. redis itself doesn't do anything to "map" instances to specific cores.

In practice, it's possible that running 8 instances on 8 cores might give you something that looks like a direct mapping of instances to cores, since a smart OS will spread processes across cores (to maximize available resources), and should show some preference for running a process on the same core that it recently vacated (to make best use of cache). But at best, this is only true for the simple case of a 1:1 mapping, with no other processes on the system, all processes equally loaded, no influence from network drivers, etc.

In the general case, all you can say is that the OS will decide how to give CPU time to all of the instances that you run, and it will probably do a pretty good job, because the scheduling parts of the OS were written by people who know what they're doing.
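
A quick way to watch the scheduler's decisions is to check which core each instance last ran on. Below is a rough Python sketch (not part of the original answer) that reads the "last CPU" field from /proc on Linux; it assumes the instances show up as processes named redis-server. Note that it is only a snapshot - the scheduler may move a process to another core at any moment.

    #!/usr/bin/env python3
    """Show which CPU core each redis-server process last ran on (Linux only)."""
    import os

    def last_cpu(pid):
        """Return the core a process last executed on, from /proc/<pid>/stat."""
        with open(f"/proc/{pid}/stat") as f:
            stat = f.read()
        # The command name is wrapped in parentheses and may contain spaces,
        # so split on the closing parenthesis first, then on whitespace.
        fields = stat.rsplit(")", 1)[1].split()
        # fields[0] is the 3rd stat field; the "processor" field is the 39th.
        return int(fields[36])

    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
            if name == "redis-server":
                print(f"pid {pid}: last ran on core {last_cpu(pid)}")
        except OSError:
            pass  # process exited while we were scanning

Running this repeatedly on a lightly loaded 8-core box will often show each instance sticking to "its" core, but nothing guarantees that.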

hobbs

Redis is a (mostly) single-threaded process, which means that an instance of the server will use a single CPU core.

The server process is mapped to a core by the operating system - that's one of the main tasks an OS is in charge of. To reiterate: assigning resources, including CPU, is an OS decision, and a very complex one at that (e.g. try reading the code of the kernel's scheduler ;)).

If I have 8 Redis instances running on a Debian/Ubuntu machine with 8 cores, each of them would map to its own core.

Perhaps - that's at the OS's discretion. There is no guarantee that every instance will get its own core, and it is entirely possible for one core to be shared by several instances.

1) What happens if I scale this machine down to 4 cores?

Scaling down like this means a restart. Once the Redis servers are restarted, the OS will schedule them across the cores that are available.

2) Do the changes happen automatically (by default), or is some explicit configuration involved?

There is no explicit configuration involved - every process, Redis or not, gets CPU time on whatever cores are available. Cores are shared between processes, with the OS orchestrating the entire thing.

3) Is there any way to control the behavior? If so, to what extent?

Yes, most operating systems provide interfaces for controlling the allocation of resources. Specifically, the taskset Linux command can be used to set or get a process's CPU affinity.

Note: you should leave CPU affinity setting to the OS - it is supposed to be quite good at that. Instead, make sure that you provision your server correctly for the load.
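
For completeness, here is a minimal sketch of what controlling affinity looks like programmatically. It assumes Linux and a hypothetical PID of 12345 for one of the Redis instances; Python's os module exposes the same sched_getaffinity/sched_setaffinity calls that taskset wraps.

    import os

    # PID of one of the Redis instances (placeholder value for illustration).
    redis_pid = 12345

    # Which cores is the process currently allowed to run on?
    # By default this is every core on the machine.
    print(os.sched_getaffinity(redis_pid))   # e.g. {0, 1, 2, 3, 4, 5, 6, 7}

    # Pin that instance to core 2 only - roughly `taskset -cp 2 12345`.
    # Changing another user's process requires the appropriate privileges.
    os.sched_setaffinity(redis_pid, {2})

    print(os.sched_getaffinity(redis_pid))   # {2}

Unless you've measured a specific scheduling problem, leaving this to the OS and sizing the machine for the load is usually the better option.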

Itamar Haber