
I'm having trouble with a particular command line for my shell: base64 /dev/urandom | head -c 1000

As you can see, the first process, base64, writes a never-ending stream of characters into the pipe, while head reads the first 1000 characters and then exits properly.

I know that the pipe's file descriptors are duplicated into every child process, and that closing them in the parent does not close them in the children, which is what causes my problem. Closing the reading end of the pipe in the parent does not make SIGPIPE terminate the first process, because that end is still open in a child.

So, basically, with a never-ending process, writing into the pipe never fails, because after the forks the reading end of the pipe is never completely closed.

So how can I make the first process exit once the second one has finished reading from the pipe? In other words, how can I get the SIGPIPE signal delivered between child processes?
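
To show the mechanism outside of my shell, here is a minimal standalone sketch (not my actual shell code, just an illustration of the behaviour I described): the parent never closes its copy of the read end, so even after the reader exits the writer is never killed by SIGPIPE; it just fills the pipe buffer and then blocks in write() forever.

#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int     main(void)
{
    int     fd[2];
    char    buf[1000];

    if (pipe(fd) == -1)
        exit(1);
    if (fork() == 0)                /* writer, like base64 /dev/urandom */
    {
        close(fd[0]);               /* the writer closes its own read end */
        while (write(fd[1], "x", 1) == 1)
            ;                       /* would stop on SIGPIPE / EPIPE */
        exit(0);
    }
    if (fork() == 0)                /* reader, like head -c 1000 */
    {
        close(fd[1]);               /* the reader closes its own write end */
        read(fd[0], buf, sizeof(buf));
        exit(0);                    /* reader is done, like head exiting */
    }
    /* problem: the parent keeps its copy of fd[0] (the read end) open,
    ** so even after the reader exits there is still one open read end;
    ** the writer never receives SIGPIPE and blocks once the buffer is full */
    while (wait(NULL) > 0)
        ;
    return (0);
}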

This code is the part where I do the piping, forking and waiting:

void            exec_process(t_shell *sh, t_job *job, int *iofile)
{
    t_parse *parse;

    parse = init_parse(sh, job->process->cmd);
    if (check_builtins(parse->argv[0]))
    {
        if (iofile[0] != 0)
            job->process->stdio[0] = iofile[0];
        if (iofile[1] != 1)
            job->process->stdio[1] = iofile[1];
        launch_builtin(sh, parse, job);
    }
    else if ((job->process->pid = fork()) == 0)
    {
        if (iofile[0] != 0)
            job->process->stdio[0] = iofile[0];
        if (iofile[1] != 1)
            job->process->stdio[1] = iofile[1];
        launch_bin(parse, job);
    }
    free_parse(&parse);
    if (iofile[0] != 0)
        close(iofile[0]);
    if (iofile[1] != 1)
        close(iofile[1]);
}

static void     launch_process(t_shell *sh, t_job *job)
{
    t_process   *process;
    int         iofile[2];
    int         pipefd[2];
    int         i;

    process = job->process;
    iofile[0] = 0;
    i = get_num_process(job->process);
    while (job->process)
    {
        if (job->process->next)
        {
            pipe(pipefd);
            iofile[1] = pipefd[1];
        }
        else
            iofile[1] = 1;
        exec_process(sh, job, iofile);
        iofile[0] = pipefd[0];
        job->process = job->process->next;
    }
    job->process = process;
    wait_for_job(sh, job, i);
}

I have one parent and two children that are linked only to the parent.

Any suggestions? Thank you

Zethir
1 Answer


Each process must close the end of the pipe it doesn't use, so that when the other process terminates, a blocked read sees end-of-file and a blocked write fails (the writer is killed by SIGPIPE or gets EPIPE).
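
For illustration, here is a minimal sketch of that close discipline for a two-command pipeline like base64 /dev/urandom | head -c 1000 (the run_pipeline function and the lack of error handling are just for the example, not taken from your shell): each child dups the end it needs onto stdin/stdout and then closes both pipe fds, and the parent, which uses neither end, closes both before waiting.

#include <unistd.h>
#include <sys/wait.h>

/* cmd1 | cmd2 : every process closes the pipe ends it does not use */
static void     run_pipeline(char *const cmd1[], char *const cmd2[])
{
    int     pipefd[2];

    pipe(pipefd);
    if (fork() == 0)                 /* left side, e.g. base64 /dev/urandom */
    {
        dup2(pipefd[1], STDOUT_FILENO);
        close(pipefd[0]);            /* this child never reads the pipe */
        close(pipefd[1]);            /* stdout already points to the pipe */
        execvp(cmd1[0], cmd1);
        _exit(127);
    }
    if (fork() == 0)                 /* right side, e.g. head -c 1000 */
    {
        dup2(pipefd[0], STDIN_FILENO);
        close(pipefd[0]);            /* stdin already points to the pipe */
        close(pipefd[1]);            /* this child never writes the pipe */
        execvp(cmd2[0], cmd2);
        _exit(127);
    }
    close(pipefd[0]);                /* the parent uses neither end, */
    close(pipefd[1]);                /* so it closes both before waiting */
    while (wait(NULL) > 0)
        ;
}

int     main(void)
{
    char    *cmd1[] = {"base64", "/dev/urandom", NULL};
    char    *cmd2[] = {"head", "-c", "1000", NULL};

    run_pipeline(cmd1, cmd2);
    return (0);
}

Once head exits, the last open copy of the read end disappears (both children closed theirs or exited, and the parent closed its own), so the next write() in base64 fails and that process is terminated by SIGPIPE.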

Nahuel Fouilleul
  • So, I should close the end that is not used in each child's fork, even though they are only copies of the parent's pipe fds? EDIT: I never know which end of the pipe I'm using, since it's a loop and it could be either the writing or the reading end that is used. – Zethir Mar 13 '18 at 16:51
  • @Zethir. Absolutely. If the parent isn't going to read from the pipe (and it shouldn't!), then you should close that fd as soon as possible. – William Pursell Mar 13 '18 at 16:54
  • Yes, for the child: if it is reading, it must close the write end; and the parent must close the read end. – Nahuel Fouilleul Mar 13 '18 at 16:54
  • [see also why-should-you-close-a-pipe-in-linux](https://stackoverflow.com/questions/19265191/why-should-you-close-a-pipe-in-linux) – Nahuel Fouilleul Mar 13 '18 at 16:57
  • But I need to keep the reading end of the pipe open for the second process so it can read from the pipe. I can't do that if it's closed, can I? – Zethir Mar 13 '18 at 16:57
  • The pipe is inherited through the fork; each process has its own copies of the file descriptors. – Nahuel Fouilleul Mar 13 '18 at 16:59
  • I think I got it: instead of having a parent (the shell) with multiple independent children, I should have a parent that has a child, which also has a child, and so on... – Zethir Mar 13 '18 at 17:00
  • Maybe not necessary, but the number of processes referencing a pipe end must drop to 0 for it to really be closed. – Nahuel Fouilleul Mar 13 '18 at 17:03