I am currently developing a news feed Android app and trying to design it according to the principles of clean architecture.
In the data layer I am using the repository pattern as a facade for the different data sources: remote data from an API (https://newsapi.org/), local data from a DB (Realm or SQLite), as well as an in-memory cache.
In my domain layer I have defined some immutable model classes (Article, NewsSource, etc.) which are used by the domain layer as well as the presentation layer (no need for extra model classes in the presentation layer, in my opinion).
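For reference, such a domain model might look like this (a minimal sketch; the field names are assumptions mirroring the remote model shown below):

// Plain domain model, free of any data source specific contract
data class Article(
    val title: String,
    val imageUrl: String,
    val url: String
)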
Does it make sense to use different model classes for the remote data source as well as for the local data source?
E.g. the remote data source uses Retrofit to make API calls, and the models need to be annotated in order to be parsed by Gson:
import com.google.gson.annotations.SerializedName

data class RemoteArticleModel(
    @SerializedName("title") val title: String,
    @SerializedName("urlToImage") val urlToImage: String,
    @SerializedName("url") val url: String
)
The models for the local data source may also have to fulfill a certain contract, e.g. models in a Realm DB need to extend RealmObject:
import io.realm.RealmObject
import io.realm.RealmResults
import io.realm.annotations.LinkingObjects

open class Dog : RealmObject() {
    var name: String? = null
    @LinkingObjects("dog") // inverse relationship to Person.dog
    val owners: RealmResults<Person>? = null
}
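For the article case, the corresponding Realm model might look like this (a sketch; RealmArticleModel is the name used in the flow further below, and the fields mirror the remote model):

// Realm requires an open class with mutable properties and a no-arg constructor
open class RealmArticleModel : RealmObject() {
    var title: String = ""
    var imageUrl: String = ""
    var url: String = ""
}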
Obviously, I don't want my domain models to be 'polluted' by any data source specific contract (annotations, RealmObject inheritance, etc.). So I thought it would make sense to use different models for the different data sources and let the repository handle the mapping between them.
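The mapping itself could be kept lightweight, e.g. as extension functions (just a sketch of one possible approach; the function names are assumptions):

// Hypothetical mappers the repository could use
fun RemoteArticleModel.toDomain(): Article =
    Article(title = title, imageUrl = urlToImage, url = url)

fun Article.toRealm(): RealmArticleModel =
    RealmArticleModel().apply {
        title = this@toRealm.title
        imageUrl = this@toRealm.imageUrl
        url = this@toRealm.url
    }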
E.g. we want to fetch all articles from the remote API, store them in the local DB, and return them to the domain layer.
The flow would be like:

1. The remote data source makes an HTTP request to the News API and retrieves a list of RemoteArticleModels.
2. The repository maps these models to the domain-specific article model (Article).
3. These are then mapped to DB models (e.g. RealmArticleModel) and inserted into the DB.
4. Finally, the list of Articles is returned to the caller.
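Put together, the repository method for that flow might look roughly like this (a sketch using the hypothetical mappers above; the data source interfaces are assumptions, and error handling is omitted):

// Assumed abstractions over the Retrofit service and the Realm DB
interface RemoteArticleDataSource { fun fetchArticles(): List<RemoteArticleModel> }
interface LocalArticleDataSource { fun saveArticles(articles: List<RealmArticleModel>) }

class ArticleRepository(
    private val remote: RemoteArticleDataSource,
    private val local: LocalArticleDataSource
) {
    fun getArticles(): List<Article> {
        // 1. Fetch the Gson-annotated models from the API
        val remoteArticles = remote.fetchArticles()
        // 2. Map them to domain models
        val articles = remoteArticles.map { it.toDomain() }
        // 3. Map the domain models to Realm models and persist them
        local.saveArticles(articles.map { it.toRealm() })
        // 4. Return the domain models to the caller
        return articles
    }
}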
Two questions arise. The above example shows how many allocations this approach causes: for every article that is downloaded and inserted into the DB, three model objects are created along the way. Would that be overkill?
Also, I know that the data layer should use different model classes than the domain layer (an inner layer should know nothing about the outer layers). But how would that make sense in the above example? I would already have two different model classes for the two different data sources. Adding a third one that's used as a 'mediator' model by the data layer/repository to handle the mapping to the other models (remote, local, domain) would add even more allocations.
So should the data layer know nothing about domain models and let the domain do the mapping from a data layer model to a domain layer model?
Should there be a generic model used only by the repository/data-layer?
Thanks, I really appreciate any help from more experienced developers :)