Let me use an analogy here.
You're working on some homework alone in a large library. You go through it, problem by problem. When you're done with one problem, you move on to another. That's a single-threaded, single-process application.
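If it helps to see it in code, here's a minimal Python sketch of that first stage. `solve()` and the problem list are stand-ins I've invented for illustration -- one worker, grinding through problems strictly one at a time:

```python
import time

def solve(problem: int) -> int:
    time.sleep(0.1)                # pretend each problem takes real work
    return problem * problem       # a placeholder "answer"

problems = range(10)
answers = [solve(p) for p in problems]  # one problem at a time, in order
print(answers)
```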
You notice this is pretty slow, so you call over a couple of friends (spawning new threads). You start getting a lot more done, since you can work on several of the problems in parallel, and since you're all in the same room, you can talk to each other pretty easily (shared memory). Unfortunately, you only have one reference book, and have to keep passing it around (shared resources). This causes arguments when several group members need it at the same time (resource contention); and if there were a second book, two people could each grab one and stubbornly wait for the other's, stalling forever (deadlock). Then there's the occasional fight when two of your group members try to write down conflicting answers at the same time (concurrency errors, better known as race conditions). That's multithreading with shared memory. A sketch of this version follows below.
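Here's the same sketch grown into threads. The `Lock` plays the single reference book; `worker()` and the shared `answers` dict are again illustrative names, not anything canonical:

```python
import threading
import time

answers = {}                       # shared memory: every thread sees this dict
reference_book = threading.Lock()  # the single shared resource

def solve(problem: int) -> int:
    time.sleep(0.1)                # pretend each problem takes real work
    return problem * problem       # a placeholder "answer"

def worker(problems):
    for p in problems:
        with reference_book:       # wait your turn for the book (contention)
            result = solve(p)
        answers[p] = result        # safe here, since each thread writes its own
                                   # keys; two threads updating the *same* entry
                                   # without a lock would be the race-condition
                                   # "fight" over conflicting answers

threads = [
    threading.Thread(target=worker, args=([0, 1, 2],)),
    threading.Thread(target=worker, args=([3, 4, 5],)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(answers)
```

One caveat about the language choice: in CPython the GIL keeps threads from running Python bytecode truly in parallel, so the "getting more done" part of the analogy applies most cleanly to I/O-bound work.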
You realize that there's another copy of that reference book in a library across town. You send some of your friends over (forking new processes) with a copy of everything they've done so far (copying memory) to go work there. Now they can get a lot more done and don't fight with you so often (less resource contention), but this comes at a cost -- they can only talk to you over a cellphone (interprocess communication), so communicating questions and answers is pretty expensive. Furthermore, after a while their answers stop resembling yours unless everyone makes a point of keeping each other updated, which eats up a lot of time (synchronization). That's multiple processes, sketched below.
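And a sketch of the multi-process version using Python's `multiprocessing` module, with a `Queue` as the cellphone. The names and the way the problems are split are still made up for illustration:

```python
import time
from multiprocessing import Process, Queue

def solve(problem: int) -> int:
    time.sleep(0.1)                # pretend each problem takes real work
    return problem * problem       # a placeholder "answer"

def worker(problems, phone):
    for p in problems:
        # Every message is pickled and shipped across the process
        # boundary -- far more expensive than touching shared memory.
        phone.put((p, solve(p)))

if __name__ == "__main__":         # required where children are spawned, not forked
    phone = Queue()                # the "cellphone" between processes
    kids = [
        Process(target=worker, args=([0, 1, 2], phone)),
        Process(target=worker, args=([3, 4, 5], phone)),
    ]
    for k in kids:                 # each child starts with a copy of our state
        k.start()
    # Collect six (problem, answer) updates -- the costly "keeping each
    # other posted" step (synchronization over IPC).
    answers = dict(phone.get() for _ in range(6))
    for k in kids:
        k.join()
    print(answers)
```

Notice that the children never touch our `answers` dict directly; everything comes back through the queue, which is exactly why cross-town communication is the expensive part of this arrangement.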