It depends.
In any case, you need to determine where your bottleneck is. Load testing is your friend here.
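If you don't have tooling handy, even a crude concurrency test will surface an obvious bottleneck. A minimal sketch in C# (the URL is a hypothetical placeholder; for real measurements a dedicated load-testing tool is preferable):

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

// Crude load-test sketch: fire 100 concurrent GETs at one endpoint
// and report total time plus success count.
class LoadTest
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        {
            var timer = Stopwatch.StartNew();

            // Start all 100 requests without awaiting them individually,
            // so they are actually in flight at the same time.
            var requests = Enumerable.Range(0, 100)
                .Select(_ => http.GetAsync("https://api.example.com/odata/Products"));
            var responses = await Task.WhenAll(requests);

            timer.Stop();
            Console.WriteLine($"100 requests in {timer.ElapsedMilliseconds} ms, " +
                $"{responses.Count(r => r.IsSuccessStatusCode)} succeeded");
        }
    }
}
```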
Don't go with a certain type of architecture just because it's popular or because it looks neat in diagrams. Go with the simplest thing that works, and then let the requirements and test results tell you where to go next.
This way you'll make the most sensible decisions based on what is actually needed from a business perspective. It's an obvious but commonly overlooked best practice.
If you and your clients are using OData and your business logic layer only takes care of basics like validation, then most of the time each action/function the client requests translates into a single HTTP call to your API. The OData query itself specifies which properties/entities to GET, POST or PUT. Consequently, you never send more information over the wire than necessary, and the request can be passed to your data layer as-is.
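As a minimal sketch of that setup, assuming ASP.NET Web API with the Microsoft.AspNet.OData package (ProductsController, Product and AppDbContext are hypothetical names):

```csharp
using System.Linq;
using Microsoft.AspNet.OData;

public class ProductsController : ODataController
{
    // Hypothetical Entity Framework context.
    private readonly AppDbContext _db = new AppDbContext();

    // [EnableQuery] lets the framework apply the client's query options
    // ($select, $filter, $expand, ...) directly to the IQueryable, so a
    // request like GET /odata/Products?$select=Name,Price reaches the
    // data layer as one call, with no extra shaping in between.
    [EnableQuery]
    public IQueryable<Product> Get()
    {
        // Validation-style business logic would run here before
        // handing the queryable over as-is.
        return _db.Products;
    }
}
```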
This changes when any or all of the following apply:
- You have large models in a simpler, CRUD-like application.
- You have complex (long-running) logic in your business logic layer.
- Your business logic / data layer needs data from more than one source, for example external services that enrich product information.
Keep in mind that the performance benefit of asynchronous processing only materializes if requests are actually executed simultaneously. And your API needs to return exactly the same result whether a certain action or function is split up into multiple requests further down the chain or handled as a single one.
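To make that concrete, here's a sketch of the same enrichment done sequentially versus simultaneously; the URLs are hypothetical placeholders for two independent sources, and both variants must produce the identical combined result:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public class ProductEnrichment
{
    private static readonly HttpClient Http = new HttpClient();

    // Sequential: the second request waits on the first even though
    // they are independent, so total latency is the sum of both.
    public static async Task<string[]> FetchSequentialAsync(int id)
    {
        var product = await Http.GetStringAsync($"https://catalog.example/products/{id}");
        var reviews = await Http.GetStringAsync($"https://reviews.example/products/{id}/reviews");
        return new[] { product, reviews };
    }

    // Parallel: both requests are in flight at once, so total latency is
    // roughly that of the slowest call. This only helps because the calls
    // don't depend on each other's results.
    public static async Task<string[]> FetchParallelAsync(int id)
    {
        var productTask = Http.GetStringAsync($"https://catalog.example/products/{id}");
        var reviewsTask = Http.GetStringAsync($"https://reviews.example/products/{id}/reviews");
        return await Task.WhenAll(productTask, reviewsTask);
    }
}
```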
To be able to say something useful about this, I would need more information about the degree of idempotency and interdependence of the API calls to your BLL and DAL.
Having said all that, performance may not even be your first concern here. If your primary objective is to have a highly extensible, pluggable architecture where different components can be developed independently of each other, you generally have two flavors:
An OWIN-like approach with a pipeline. You can easily plug additional middleware into the pipeline (even from external assemblies), and whether you split up the JSON at any point in the communication chain depends on whether there is a performance benefit in doing so.
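A minimal sketch of that first flavor, using Microsoft.Owin's Startup convention (the timing middleware is just an illustrative stand-in for whatever components you'd compose in):

```csharp
using System.Diagnostics;
using Microsoft.Owin;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Each Use() call adds one middleware component to the pipeline;
        // components from external assemblies plug in the same way.
        app.Use(async (context, next) =>
        {
            var timer = Stopwatch.StartNew();
            await next(); // hand off to the rest of the pipeline
            timer.Stop();
            Debug.WriteLine($"{context.Request.Path} took {timer.ElapsedMilliseconds} ms");
        });

        // Terminal middleware: requests not handled above end here.
        app.Run(context => context.Response.WriteAsync("Hello from the pipeline"));
    }
}
```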
A full-blown SOA where each component is a web service by itself, also referred to as "microservices". You might go with this approach if your data comes from different (internal and/or external) sources. It gives you more flexibility when one of those sources updates its specification: you would only need to update and redeploy that particular API.
But, and I cannot stress this enough: start with the simplest thing possible and from there try to find out what you actually need.