Across our codebase, many crates expose both a `lib.rs` and a `main.rs`, effectively forcing the binary to be an "external" user of its own crate.
The crate usually exposes a configuration and a set of public functions that consume the configuration in order to spawn specific workers. A typical setup may look like this:
```toml
# Cargo.toml
[package]
name = "foo"
version = "0.1.0"
edition = "2021"
```

```rust
// src/config.rs
#[derive(Debug)]
pub struct AppConfig {
    pub dead_code: String,
}
```

```rust
// src/lib.rs
pub mod config;

pub fn run(config: config::AppConfig) {
    println!("{config:?}")
}
```

```rust
// src/main.rs
use foo::config::AppConfig;
use foo::run;

fn main() {
    // config is loaded from a file in the actual code
    let config = AppConfig { dead_code: "never read".into() };
    run(config)
}
```
In such a setup, dead code analysis fails to detect unused fields in the configuration: the struct is part of the public API, so a field that is never read by the crate itself may still be read by external users, and rustc stays silent. But in our case there are no external users, and it would be nice to mitigate this somehow and get dead code checks back.
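As a sanity check, here is a minimal single-file sketch (not our real layout) showing that narrowing the field's visibility is enough to bring the lint back, which is why the options below all revolve around making `AppConfig` less public:

```rust
#[derive(Debug)]
pub struct AppConfig {
    // With `pub`, this field is public API and rustc never warns about it.
    // With `pub(crate)`, rustc reports: field `dead_code` is never read
    // (the derived `Debug` impl is intentionally ignored by the analysis).
    pub(crate) dead_code: String,
}

fn main() {
    let config = AppConfig { dead_code: "never read".into() };
    println!("{config:?}");
}
```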
For now, I see three possible ways to fix this:
- Pack the contents of `main()` into a public function in the library, then make `AppConfig` private or crate-public. This works, but the binary then seems like an unnecessary part of the crate if all it does is import and call a single parameterless function.
- Make `AppConfig` private/crate-public and expose a wrapper around it that allows loading the config, then use the public wrapper as a parameter for the publicly facing functions.
- Add `mod config` to `main.rs` and use `crate::config` instead of `foo::config`. This seems to be the way, but when I tried to follow this lead, I quickly found out that there are a lot of modules that would have to be added as modules to the binary. In addition, there were conflicts in what to `use` between the binary and library builds when some modules were exposed to the library but not to the binary. So this requires me to keep track of all the modules in two places instead of one, which seems error-prone.
For now, I am going with (2), but I would like some insight into whether this is the correct way to handle such cases. There are already quite a few places where we failed to detect dead code in time because it was in a "public" API exposed only to the binary in the same crate. Maybe there is some Clippy-specific way of marking the API as private, or at least having it treated as such?
The question seems to be something of the opposite of *Dead code warning with multiple binaries?*