It depends heavily on the implementation, but if you implement the database writes in a way that doesn't block Scrapy's event loop too much, there isn't much difference performance-wise.
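For example, a pipeline can return a Twisted Deferred from process_item so that a blocking database call runs in a thread pool instead of stalling the reactor. This is only a minimal sketch; save_to_db is a hypothetical helper standing in for your actual insert logic:

```python
from twisted.internet import threads


class NonBlockingWritePipeline:
    def process_item(self, item, spider):
        # Run the blocking database call in Twisted's thread pool so
        # the reactor (and therefore the spider) keeps running.
        deferred = threads.deferToThread(self.save_to_db, item)
        # Hand the item back to the engine once the write finishes.
        deferred.addCallback(lambda _: item)
        return deferred

    def save_to_db(self, item):
        # Hypothetical blocking insert (sqlite3, psycopg2, an ORM, ...).
        pass
```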
There is, however, a pretty huge structural difference. Scrapy's design philosophy strongly encourages using middlewares and pipelines for the sake of keeping spiders clean and understandable.
In other words: the spider should crawl and extract data, middlewares should modify requests and responses, and pipelines should run the returned items through some external logic (such as writing them to a database or a file).
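As a rough sketch of that split (QuotesSpider, SqlitePipeline and quotes.db are made-up names for illustration), the spider only yields data and the pipeline owns the storage:

```python
import scrapy


class QuotesSpider(scrapy.Spider):
    # The spider's only job: crawl pages and yield plain data.
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com"]

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
```

```python
# pipelines.py -- every yielded item is handed here by the engine.
import sqlite3


class SqlitePipeline:
    def open_spider(self, spider):
        # Called once when the spider starts: set up the connection.
        self.conn = sqlite3.connect("quotes.db")
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS quotes (text TEXT, author TEXT)"
        )

    def process_item(self, item, spider):
        # Called for every item the spider yields.
        self.conn.execute(
            "INSERT INTO quotes VALUES (?, ?)",
            (item["text"], item["author"]),
        )
        return item

    def close_spider(self, spider):
        # Called once when the spider finishes: flush and clean up.
        self.conn.commit()
        self.conn.close()
```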
Regarding your follow-up question:
how and when is pipelines.py invoked? What happens after the yield statement?
Take a look at the Architecture overview documentation page, and if you'd like to dig deeper you'll have to understand the Twisted asynchronous framework, since Scrapy is essentially a big, smart framework built around it.
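In short: when your spider yields an item, the engine passes it through every pipeline enabled in the ITEM_PIPELINES setting, in ascending order of the priority number; each pipeline's process_item is called once per item, with open_spider and close_spider called at the start and end of the crawl. A minimal sketch, reusing the hypothetical SqlitePipeline from above (myproject is a placeholder project name):

```python
# settings.py -- a pipeline only runs if it is enabled here.
# The numbers (0-1000) decide the order: lower runs first.
ITEM_PIPELINES = {
    "myproject.pipelines.SqlitePipeline": 300,
}
```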