If I've understood correctly, you want Kafka as the final backend where the data is stored, not as the internal channel a Flume agent uses to connect its source and sink. I mean, a Flume agent is basically composed of a source that receives data and builds Flume events, which are put into a channel so that a sink can read those events and do something with them (typically, persist the data in a final backend). Thus, in your design, if you use Kafka as the internal channel, it will be just that: an internal means of communication between the HTTP source and the HDFS sink, and it will never be accessible from outside the agent.
In order to meet your needs, you will need an agent such as:
    http_source -----> memory_channel_1 -----> HDFS_sink ------> HDFS
         |
         |---------> memory_channel_2 -----> Kafka_sink -----> Kafka
    {..................Flume agent..........................}  {backend}
Please observe that the channels shown above are internal ones. They can be based on memory or files, or even on Kafka, but a Kafka channel would still be different from the final Kafka where you persist the data and which will be accessible to your app.
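As a sketch of what such an agent configuration could look like (the agent name `a1`, ports, hosts, paths, and topic are placeholders you would adapt; the Kafka sink property names shown are the Flume 1.6+ style):

```properties
# One HTTP source fanning out to two memory channels, each drained by its own sink.
a1.sources = http_source
a1.channels = hdfs_channel kafka_channel
a1.sinks = hdfs_sink kafka_sink

# HTTP source; the default (replicating) channel selector copies
# every event into both channels.
a1.sources.http_source.type = http
a1.sources.http_source.port = 8080
a1.sources.http_source.channels = hdfs_channel kafka_channel

# Internal channels (memory-based here; could be file- or Kafka-based).
a1.channels.hdfs_channel.type = memory
a1.channels.kafka_channel.type = memory

# HDFS sink persisting events to the HDFS backend.
a1.sinks.hdfs_sink.type = hdfs
a1.sinks.hdfs_sink.channel = hdfs_channel
a1.sinks.hdfs_sink.hdfs.path = hdfs://namenode:8020/flume/events

# Kafka sink writing to the external Kafka cluster your app will consume from.
a1.sinks.kafka_sink.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.kafka_sink.channel = kafka_channel
a1.sinks.kafka_sink.kafka.bootstrap.servers = kafkahost:9092
a1.sinks.kafka_sink.kafka.topic = my_topic
```

Note each sink reads from its own channel; a single channel cannot feed two sinks with the same events, which is why the source is wired to two channels.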