SSDs are commonplace now; Amazon EBS is backed by SSDs, and hence most cloud databases now also run on SSDs (Heroku PostgreSQL, etc.). Databases and related architectures were traditionally designed around the assumption that random access is expensive; with SSDs, that assumption largely no longer holds.
How do SSDs affect the following?
- Database design - DBs are designed to minimize disk seeks (write-ahead logs, B-trees). How do SSDs change the internals and tuning of a database design?
- Application development - The working assumption has always been (a) that you want to serve user requests from memory, not the DB, and (b) that access to the DB is IO-bound. With SSDs, retrieving data from the DB can be fast enough that DB access is often network-bound instead. Does this reduce the need for in-memory databases? Obviously you still want to pre-compute expensive operations, but you can potentially just store the results in a DB.
- Specialized Databases - There are quite a few DBs that do things relational DBs are supposed to be bad at (partially because of random data access). One example is graph DBs (Neo4j), which store nodes and adjacency lists on disk in a compact way. Are these databases as useful if we can deploy an RDBMS on SSDs and not worry about random access?
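For context, the premise above ("random access is no longer bad") is easy to sanity-check on your own hardware. Here is a minimal sketch (not from the question; the file size, block size, and read count are arbitrary choices of mine) that times sequential vs. random 4 KiB reads on a scratch file. On an SSD the two timings are typically close; on a spinning disk the random pattern is orders of magnitude slower. Note the caveat in the comments: the OS page cache can hide the difference unless the file is much larger than RAM.

```python
import os
import random
import tempfile
import time

def measure_read_patterns(file_size=64 * 1024 * 1024, block=4096, reads=2000):
    """Time `reads` sequential vs. random block-sized reads on a scratch file.

    Caveat: the OS page cache can mask the device's seek cost; for a
    faithful measurement, use a file larger than RAM or drop caches first.
    Returns (sequential_seconds, random_seconds).
    """
    # Create a scratch file filled with random bytes.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(file_size))
        path = f.name
    try:
        with open(path, "rb") as f:
            # Sequential pattern: adjacent blocks, in order.
            t0 = time.perf_counter()
            for i in range(reads):
                f.seek(i * block)
                f.read(block)
            seq = time.perf_counter() - t0

            # Random pattern: the same number of reads, scattered offsets.
            offsets = [random.randrange(file_size // block) * block
                       for _ in range(reads)]
            t0 = time.perf_counter()
            for off in offsets:
                f.seek(off)
                f.read(block)
            rnd = time.perf_counter() - t0
        return seq, rnd
    finally:
        os.unlink(path)
```

The interesting number is the ratio `rnd / seq`: close to 1 supports the SSD argument above, while a large ratio means the storage (or its caching behavior) still rewards seek-minimizing designs like B-trees and WAL.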