
I have a Heroku app that has a single process. I'm trying to change it so that several worker processes consume a dedicated queue of incoming webhooks. To do so, I am using a Node.js backend with the Bull and Throng packages, which are backed by Redis. All of this is deployed on Docker.

I've found various tutorials that cover some of this combination, but not all of it, so I'm not sure how to continue. When I spin up Docker, the main server runs, but when the worker process tries to start, it just logs Killed, which isn't a very detailed error message.

Most of the information I found is here

My worker process file is worker.ts:

import { bullOptions, RedisData } from '../database/redis';
import throng from 'throng';
import { Webhooks } from '@octokit/webhooks';
import config from '../config/main';
import { configureWebhooks } from '../lib/github/webhooks';
import Bull from 'bull';

// Spin up multiple processes to handle jobs to take advantage of more CPU cores
// See: https://devcenter.heroku.com/articles/node-concurrency for more info
const workers = 2;

// The maximum number of jobs each worker should process at once. This will need
// to be tuned for your application. If each job is mostly waiting on network
// responses it can be much higher. If each job is CPU-intensive, it might need
// to be much lower.
const maxJobsPerWorker = 50;

const webhooks = new Webhooks({
  secret: config.githubApp.webhookSecret,
});

configureWebhooks(webhooks);

async function startWorkers() {
  console.log('starting workers...');

  const queue = new Bull<RedisData>('work', bullOptions);

  try {
    await queue.process(maxJobsPerWorker, async (job) => {
      console.log('processing...');
      try {
        await webhooks.verifyAndReceive(job.data);
      } catch (e) {
        console.error(e);
      }

      // Nothing to return: resolving the processor's promise completes the job.
      // (Returning job.finished() here would await the job itself and hang.)
    });
  } catch (e) {
    console.error(`Error processing worker`, e);
  }
}

throng({ workers: workers, start: startWorkers });

In my main server, I have the file redis.ts:

import Bull, { QueueOptions } from 'bull';
import { EmitterWebhookEvent } from '@octokit/webhooks';

export const bullOptions: QueueOptions = {
  redis: {
    port: 6379,
    host: 'cache',
    tls: {
      rejectUnauthorized: false,
    },
    connectTimeout: 30_000,
  },
};

export type RedisData = EmitterWebhookEvent & { signature: string };

let githubWebhooksQueue: Bull.Queue<RedisData> | undefined = undefined;

export async function addToGithubQueue(data: RedisData) {
  try {
    await githubWebhooksQueue?.add(data);
  } catch (e) {
    console.error(e);
  }
}

export function connectToRedis() {
  githubWebhooksQueue = new Bull<RedisData>('work', bullOptions);
}

(Note: I invoke connectToRedis() before the worker process begins)

My dockerfile is

# We can change the version of node by replacing `lts` with anything found here: https://hub.docker.com/_/node
FROM node:lts

ENV PORT=80

WORKDIR /usr/src/app

# Install dependencies
COPY package*.json ./
COPY yarn.lock ./
RUN yarn
RUN yarn global add npm-run-all

# Bundle app source
COPY . .

# Expose the web port
EXPOSE 80
EXPOSE 9229
EXPOSE 6379

CMD npm-run-all --parallel start start-notification-server start-github-server

and my docker-compose.yml is

version: '3.7'

services:
  redis:
    image: redis
    container_name: cache
    expose:
      - 6379

  api:
    links:
      - redis
    image: instantish/api:latest
    environment:
      REDIS_URL: redis://cache
    command: npm-run-all --parallel dev-debug start-notification-server-dev start-github-server-dev
    depends_on:
      - mongo
    env_file:
      - api/.env
      - api/flags.env
    ports:
      - 2000:80
      - 9229:9229
      - 6379:6379
    volumes:
      # Activate if you want your local changes to update the container
      - ./api:/usr/src/app:cached
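In case it matters, I also considered mirroring Heroku's web/worker split by giving the worker its own compose service with an explicit memory cap. An untested sketch (the service name and the 1g figure are my guesses; note mem_limit is honored by the Compose spec and v2 files, while under v3 swarm mode it would live at deploy.resources.limits.memory):

```
  worker:
    image: instantish/api:latest
    command: yarn start-github-server-dev
    depends_on:
      - redis
    env_file:
      - api/.env
    environment:
      REDIS_URL: redis://cache
    # Raise this if the process is being OOM-killed; ts-node alone can
    # use several hundred MB while type-checking on startup.
    mem_limit: 1g
```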

Finally, the relevant NPM scripts for my project are

"dev-debug": "nodemon --watch \"**/**\" --ext \"js,ts,json\" --exec \"node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js\"",

"start-github-server-dev": "MONGOOSE_DEBUG=false nodemon --watch \"**/**\" --ext \"js,ts,json\" --exec \"ts-node ./scripts/worker.ts\"",

The docker container logs are:

> instantish@1.0.0 start-github-server-dev /usr/src/app
> MONGOOSE_DEBUG=false nodemon --watch "**/**" --ext "js,ts,json" --exec "ts-node ./scripts/worker.ts"
> instantish@1.0.0 dev-debug /usr/src/app
> nodemon --watch "**/**" --ext "js,ts,json" --exec "node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js"
[nodemon] 1.19.1
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: **/**
[nodemon] starting `ts-node ./scripts/worker.ts`
[nodemon] 1.19.1
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: **/**
[nodemon] starting `node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js`
worker.ts
Killed
[nodemon] app crashed - waiting for file changes before starting...
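For what it's worth, my current guess is memory: a bare Killed with no stack trace usually means the kernel sent SIGKILL (e.g. the OOM killer), and a SIGKILL-ed process exits with status 128 + 9 = 137, the same code Docker reports for OOM-killed containers. A quick sanity check of that exit-code convention:

```shell
# A process killed with SIGKILL (signal 9) exits with status 128 + 9 = 137,
# the same code Docker reports for OOM-killed containers.
sh -c 'kill -9 $$'
echo $?   # prints 137
```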
Richard Robinson
  • Could you add more details about the error you are seeing? – Ayzrian May 13 '21 at 12:16
  • @Ayzrian ok I edited the message with the logs – Richard Robinson May 13 '21 at 15:30
  • @RichardRobinson rather than asking more questions, if I run the above code, is it enough, or shall I add or manage something more. I am trying to simulate the issue, and identify the cause. Hope I don't bother you. – Dipak May 15 '21 at 14:23
  • according to : https://stackoverflow.com/questions/37486631/nodemon-app-crashed-waiting-for-file-changes-before-starting you already have some process which listens on port 9229. So you need to stop/kill it before running your containers – ofirule May 15 '21 at 20:17
