Here is what I am trying to achieve: I have an application that mines bitcoins. I want to improve it by introducing a remote machine that connects to the server, after which the server assigns some of the work to it.

On a single machine, I initially spawn all the processes (actors) and store their PIDs, then loop through the PIDs and assign the mining job to each one. Each actor mines and tries to find a target hash that satisfies the difficulty.

Here is the code for the same:

-module(server).
-export([start/2, spawnMiners/4, mineCoins/3]).

start(LeadingZeroes, InputString) ->
    MinersPIDList = spawnMiners(1000, [], LeadingZeroes, InputString),
    mineCoins(MinersPIDList, LeadingZeroes, InputString).

%% All miners spawned: return the accumulated list of PIDs.
spawnMiners(0, PIDs, _LeadingZeroes, _InputString) -> PIDs;
spawnMiners(NumberOfMiners, PIDs, LeadingZeroes, InputString) ->
    PID = spawn(miner, findTargetHash, [LeadingZeroes, InputString]),
    spawnMiners(NumberOfMiners - 1, [PID | PIDs], LeadingZeroes, InputString).

mineCoins([], _LeadingZeroes, _InputString) -> ok;
mineCoins([PID | PIDs], LeadingZeroes, InputString) ->
    PID ! {self(), {mine}},
    %% Note: this waits for each miner's reply in turn before
    %% messaging the next one.
    receive
        {found, InputWithNonce, GeneratedHash} ->
            io:format("Found ~p with hash ~p~n", [InputWithNonce, GeneratedHash]);
        {not_found} ->
            ok;
        Other ->
            io:format("Unexpected message: ~p~n", [Other])
    end,
    mineCoins(PIDs, LeadingZeroes, InputString).

Here, findTargetHash(LeadingZeroes, InputString) is the task each miner process runs; it reports the result back if found. So when 1000 processes are spawned, all of them run on the same machine.
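For reference, a minimal sketch of what such a miner could look like. The message protocol ({From, {mine}} in, {found, Input, Hash} out) matches the server loop above, but the nonce search and the SHA-256 hashing (crypto:hash/2 plus binary:encode_hex/1, OTP 24+) are assumptions about code not shown in the question:

```erlang
-module(miner).
-export([findTargetHash/2]).

%% Wait for a {From, {mine}} request, then search for a nonce whose
%% SHA-256 hash starts with LeadingZeroes zero characters.
findTargetHash(LeadingZeroes, InputString) ->
    receive
        {From, {mine}} ->
            Target = lists:duplicate(LeadingZeroes, $0),
            mine(From, Target, InputString, 0)
    end.

mine(From, Target, InputString, Nonce) ->
    Input = InputString ++ integer_to_list(Nonce),
    Hash = binary_to_list(binary:encode_hex(crypto:hash(sha256, Input))),
    case lists:prefix(Target, Hash) of
        true  -> From ! {found, Input, Hash};
        false -> mine(From, Target, InputString, Nonce + 1)
    end.
```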

This is what I wish to achieve next: imagine a new machine with its own IP address connects to this server; after connecting, the server divides these 1000 processes between itself and the remote machine (500 each, for genuinely distributed computing). I do not know where to start or how to achieve this. Any leads would be appreciated (maybe something to do with gen_tcp?).
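One way to get there without raw gen_tcp is Erlang's built-in distribution: start each VM as a named node with a shared cookie, connect them with net_adm:ping/1, and use the four-argument spawn/4, which takes a node as its first argument. A sketch, assuming placeholder node names like 'server@192.168.0.10' and 'worker@192.168.0.11' and that the miner module's code is available on both nodes:

```erlang
%% Start each machine as a distributed node, e.g.:
%%   erl -name server@192.168.0.10 -setcookie mycookie
%%   erl -name worker@192.168.0.11 -setcookie mycookie
%% Then, on the server node, connect once:
%%   net_adm:ping('worker@192.168.0.11').  % pong on success

%% A variant of spawnMiners/4 that round-robins the miners over the
%% local node plus every connected node in nodes().
spawnMiners(0, PIDs, _LeadingZeroes, _InputString) ->
    PIDs;
spawnMiners(N, PIDs, LeadingZeroes, InputString) ->
    Nodes = [node() | nodes()],
    Node = lists:nth((N rem length(Nodes)) + 1, Nodes),
    %% spawn/4 with a node argument runs the process on that node;
    %% the returned PID can be messaged exactly like a local one.
    PID = spawn(Node, miner, findTargetHash, [LeadingZeroes, InputString]),
    spawnMiners(N - 1, [PID | PIDs], LeadingZeroes, InputString).
```

With one worker connected this splits the 1000 miners roughly 500/500, and mineCoins/3 works unchanged because message sends to remote PIDs are location-transparent.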

Rishab Parmar
  • Does [this](https://stackoverflow.com/a/5135876/3457068) answer your question? If not, could you clarify what you mean by "divides"? Can the miners be killed+(re)started (on any machine) as part of such step? – pottu Sep 20 '22 at 13:36
  • Not really, by divide, I mean that if there are 1000 processes to be worked on, then the program should "distribute" the work among all the available nodes equally, so both the nodes get 500 processes each assuming there are two machines – Rishab Parmar Sep 21 '22 at 14:41
  • Right, and is it not an acceptable/possible solution to save the 500 processes' state, kill them, and restart them with their saved state on the new node? If so, why? – pottu Sep 22 '22 at 06:43
  • Umm, this would just add more work: first save the state, kill the processes, and then reinstate the state on the other machine? There should be an easier way to do this, maybe with nodes() or slaves in Erlang... looking for something along those lines. That's not distribution, just a workaround. – Rishab Parmar Sep 22 '22 at 22:38

0 Answers