An event store can be nothing more than a journal of events that is replayed, in full, to regenerate a service's state. If you use a compacted topic in Kafka, you can minimise the restore time (a compacted topic simply drops superseded events for the same key, so only the latest event per key is replayed). This is fine for runtime state.
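As a minimal sketch of that setup, here's how you might create a compacted topic with Kafka's AdminClient. The topic name, partition counts and bootstrap address are illustrative placeholders, not anything prescribed above:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CreateCompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // With cleanup.policy=compact, Kafka retains only the latest event
            // per key, so a full replay rebuilds current state without the
            // whole history.
            NewTopic orders = new NewTopic("orders", 3, (short) 3)
                    .configs(Map.of(TopicConfig.CLEANUP_POLICY_CONFIG,
                                    TopicConfig.CLEANUP_POLICY_COMPACT));
            admin.createTopics(Collections.singleton(orders)).all().get();
        }
    }
}
```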
There are a number of options for facilitating queries. If you don't mind getting into the whole KStreams thing, the simplest is to materialise a queryable view in a KTable or state store. This is a database (RocksDB behind the scenes) embedded inside your service. It acts as a disk-backed cache over the data in the backing log. This has the useful property that the backing stream can be shared by many services, while the materialised view is owned entirely by each one.
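Here's a rough sketch of that pattern using Kafka Streams' interactive queries. The "orders" topic, store name, string serdes and the key looked up are all assumptions for illustration:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class QueryableView {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-view");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Materialise the shared "orders" topic into a local, RocksDB-backed
        // state store owned by this service instance.
        StreamsBuilder builder = new StreamsBuilder();
        builder.table("orders",
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("orders-store"));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Query the store like a local cache. (A real service would wait for
        // the RUNNING state first; querying too early throws
        // InvalidStateStoreException.)
        ReadOnlyKeyValueStore<String, String> view = streams.store(
                StoreQueryParameters.fromNameAndType("orders-store",
                        QueryableStoreTypes.keyValueStore()));
        System.out.println(view.get("order-123"));
    }
}
```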
More generally, a good approach is to do the simplest thing that will work, then evolve it. Try to keep services stateless and event driven. Pull in KTables or state stores if your requirements necessitate stateful elements. If your data requirements outgrow these, look to branch out into an independent database. If you started with a Kafka-backed store, you can typically migrate the data relatively easily with the Connect API (although your logic may be affected).
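To make that migration step concrete, here's roughly what such a Connect sink might look like, assuming the Confluent JDBC sink connector; the connector name, topic, key column and connection URL are illustrative, and you'd POST this to the Connect REST endpoint:

```json
{
  "name": "orders-db-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "topics": "orders",
    "connection.url": "jdbc:postgresql://localhost:5432/orders",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "pk.fields": "id",
    "auto.create": "true"
  }
}
```

With upsert mode against a compacted topic, the target table converges on the same latest-value-per-key view the log holds.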
One trick worth noting for this type of implementation is to avoid synthesising request-response channels between services. Instead, follow an Event Driven Architecture in which services build up a shared narrative of events. Martin Fowler has a good write-up on this from a while back; he calls it Event Collaboration.
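As a minimal sketch of that collaboration style: rather than being asked to do anything, a service watches the shared topic and appends its own events in response. The topic name, the plain-string event encoding and the validation service itself are all hypothetical:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class ValidationService {
    public static void main(String[] args) {
        Properties cProps = new Properties();
        cProps.put("bootstrap.servers", "localhost:9092");
        cProps.put("group.id", "validation-service");
        cProps.put("key.deserializer", StringDeserializer.class.getName());
        cProps.put("value.deserializer", StringDeserializer.class.getName());

        Properties pProps = new Properties();
        pProps.put("bootstrap.servers", "localhost:9092");
        pProps.put("key.serializer", StringSerializer.class.getName());
        pProps.put("value.serializer", StringSerializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pProps)) {
            consumer.subscribe(List.of("order-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> rec : records) {
                    // Nobody sent us a request. We observe the shared narrative
                    // and add our own event to it; downstream services react to
                    // ours the same way.
                    if (rec.value().contains("OrderCreated")) {
                        producer.send(new ProducerRecord<>("order-events",
                                rec.key(), "OrderValidated"));
                    }
                }
            }
        }
    }
}
```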