I'm building a parsing engine that takes JSON strings as input, parses them, and outputs the transformed result. I'd like the engine to run as a daemon or service, so I can deploy it using Docker. It needs to be extremely high-performance, because it will parse high volumes of data.
I understand I could just have a script that launches sed as a background process, but launching and re-launching a process incurs overhead, which reduces performance. I'm thinking that running sed as a daemon or service might give me the convenience of an existing, well-vetted tool while maximizing system performance.
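To illustrate the overhead I'm worried about, here's the naive per-call approach (the sed expression is just a stand-in for whatever transform I'd actually run):

```shell
#!/bin/sh
# Naive approach: fork/exec a fresh sed for every input string.
# At high volume, the process-launch cost dominates the actual work.
for s in '{"a":1}' '{"b":2}' '{"c":3}'; do
    printf '%s\n' "$s" | sed 's/:/ : /'
done
```

Each iteration pays the full cost of spawning a new sed process, which is exactly what I'd like to avoid.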
Additionally, if awk or another existing tool would be better suited to this purpose, I'm open to other options. But I'd prefer a well-vetted Linux/Unix tool if possible, just to avoid reinventing the wheel.
I read this SO question, and this one regarding running emacs as a daemon, but neither approach seems to work for sed.
I have also considered piping stdin to sed running as a daemon, but I'm not sure whether that is the best approach.
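This is roughly what I had in mind: one long-lived sed fed through a named pipe, so many strings are processed by a single process. This is only a sketch (it assumes GNU sed for `-u`/unbuffered output, and the file names are made up):

```shell
#!/bin/sh
# Sketch: a single long-running sed served by a FIFO.
# GNU sed's -u (unbuffered) is assumed; paths are illustrative.
dir=$(mktemp -d)
mkfifo "$dir/in"
# One sed process handles every input; it exits only when the pipe closes.
sed -u 's/:/ : /' < "$dir/in" > "$dir/out" &
exec 3> "$dir/in"          # hold the write end open between requests
printf '%s\n' '{"a":1}' >&3
printf '%s\n' '{"b":2}' >&3
exec 3>&-                  # close the pipe; sed sees EOF and exits
wait
cat "$dir/out"
```

Holding file descriptor 3 open is what keeps sed alive between writes; without it, sed would see EOF after the first writer finished. I'm unsure how well this generalizes (e.g. getting responses back per-request, or multiple concurrent clients), which is part of what I'm asking.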
UPDATE The key thing I am trying to ask is this: how can I run sed, awk, or jq as a daemon, so that I can pass many strings to it without incurring the overhead of launching a new process each time?