
I am relatively experienced with many AWS services, but I have a large gap around Aurora/RDS.

I'm trying to create a multi-region, multi-master (write replicas in each region) setup.

The purpose is to give users low latency (each user's reads and writes are served by a replica in their region) and resilience (if there is a region outage, requests can be routed to another region; the latency will be higher, but reduced service is better than no service).

I'm trying to learn about AWS Aurora and have created a toy cluster to experiment with. It seems I can create a cluster that is served out of multiple regions, with Aurora replicating data between regions automatically. I've also read that a multi-master setup is possible, but my toy cluster only had one write partition and I couldn't work out how to create another write partition in a different region, which made me question whether it's possible at all.

Here is a diagram of what I'm thinking:

https://i.stack.imgur.com/bfigZ.jpg

Thank you in advance!

Alex Fanthome

4 Answers


The purpose is to give low latency to users (if each read and write replica is in the user's region)

I couldn't work out how to create another write partition in another region, which made me question if it's possible?

That is not possible (at least not currently) because of multi-master Aurora limitations.

  • All DB instances in a multi-master cluster must be in the same AWS Region.

and others, such as:

  • You can have a maximum of two DB instances in a multi-master cluster.
  • You can't enable cross-Region replicas from multi-master clusters.

You can read more about these limitations in the Aurora multi-master documentation.


The best thing you can do in your scenario is to create a single master and place read replicas in those additional regions (possibly with some caching if necessary).
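For reference, here is a minimal sketch of that single-writer layout using boto3 and Aurora Global Database. The regions, identifiers, and instance class are placeholders, and each call is asynchronous, so in practice you would wait for each resource to become available before issuing the next call.

    import boto3

    PRIMARY_REGION = "us-east-1"    # writer region (placeholder)
    SECONDARY_REGION = "eu-west-1"  # read-only region (placeholder)

    primary = boto3.client("rds", region_name=PRIMARY_REGION)
    secondary = boto3.client("rds", region_name=SECONDARY_REGION)

    # Global database container that ties the regional clusters together.
    primary.create_global_cluster(
        GlobalClusterIdentifier="my-global-db",
        Engine="aurora-mysql",
    )

    # Primary (writer) cluster and its instance.
    primary.create_db_cluster(
        DBClusterIdentifier="my-primary-cluster",
        Engine="aurora-mysql",
        GlobalClusterIdentifier="my-global-db",
        MasterUsername="admin",
        MasterUserPassword="change-me",  # use Secrets Manager in practice
    )
    primary.create_db_instance(
        DBInstanceIdentifier="my-primary-writer",
        DBClusterIdentifier="my-primary-cluster",
        DBInstanceClass="db.r5.large",
        Engine="aurora-mysql",
    )

    # Secondary read-only cluster; it receives data via Aurora's
    # storage-level replication, so no credentials are specified here.
    secondary.create_db_cluster(
        DBClusterIdentifier="my-secondary-cluster",
        Engine="aurora-mysql",
        GlobalClusterIdentifier="my-global-db",
    )
    secondary.create_db_instance(
        DBInstanceIdentifier="my-secondary-reader",
        DBClusterIdentifier="my-secondary-cluster",
        DBInstanceClass="db.r5.large",
        Engine="aurora-mysql",
    )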

Matus Dubrava
  • So I'd have to look into another solution... I guess a separate cluster for each region, and then use Lambda/DMS to replicate between clusters manually (and resolve conflicts). The only time there would be conflicts is when users change region (either by physically moving, or because a region goes down); either way there shouldn't be many conflicts. Are there any common patterns I can follow here? Surely others have solved this problem? – Alex Fanthome Aug 03 '20 at 11:43
  • I am not sure that SQL-based databases are the right choice if speed at such scale is of the utmost importance. Others have solved this issue by turning to NoSQL databases, which are much easier to shard and scale. The issue is that you would need to be able to perform eventually consistent writes (due to cross-region distance), and SQL databases are not designed to be used like that. For example, in DynamoDB you can have global tables with multiple cross-region master writers that can handle eventually consistent writes. – Matus Dubrava Aug 03 '20 at 12:00

As mentioned earlier, this is not possible with Aurora.

However, DynamoDB supports multi-active, multi-region tables:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GlobalTables.html
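As a rough sketch (table name, key schema, and regions are placeholders), this is what enabling a global table can look like with boto3, assuming the current (2019.11.21) global tables version, which adds replicas to an existing table with streams enabled:

    import boto3

    ddb = boto3.client("dynamodb", region_name="us-east-1")  # placeholder region

    # Global tables require DynamoDB Streams on the source table.
    ddb.create_table(
        TableName="users",
        AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
        StreamSpecification={
            "StreamEnabled": True,
            "StreamViewType": "NEW_AND_OLD_IMAGES",
        },
    )
    ddb.get_waiter("table_exists").wait(TableName="users")

    # Add a replica in a second region; both regions then accept writes.
    ddb.update_table(
        TableName="users",
        ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
    )

Note that concurrent writes to the same item from different regions are resolved last-writer-wins, which fits the asker's expectation that conflicts would be rare.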

RogerS

As others have said, you cannot deploy Amazon Aurora as both multi-Region and multi-master. However, you can deploy multi-Region using Aurora Global Database: the single writer endpoint lives in the primary Region, while reader endpoints are available in all the other Regions. You can also use write forwarding (assuming you are using the MySQL flavor of Aurora) in the read-only Regions. Since latency is a concern for you, note that a forwarded write still travels back to the primary Region, so writes incur that extra cross-Region latency.
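To illustrate, here is a sketch of how an application in a read-only Region can use write forwarding. The endpoint, credentials, and table are placeholders, and it assumes the secondary cluster was created with write forwarding enabled (EnableGlobalWriteForwarding=True); the session must opt in via the aurora_replica_read_consistency variable before forwarded writes are accepted.

    import pymysql

    # Connect to the local Region's reader endpoint (placeholder hostname).
    conn = pymysql.connect(
        host="my-secondary-cluster.cluster-ro-example.eu-west-1.rds.amazonaws.com",
        user="admin",
        password="change-me",
        database="app",
    )

    with conn.cursor() as cur:
        # 'SESSION' lets this session read its own forwarded writes
        # once they replicate back from the primary Region.
        cur.execute("SET aurora_replica_read_consistency = 'SESSION'")
        # This statement is transparently forwarded to the writer in the
        # primary Region, so it pays a cross-Region round trip.
        cur.execute("INSERT INTO events (msg) VALUES (%s)", ("hello",))
    conn.commit()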

Seth E

The purpose is to give low latency to users and to give resilience.

Then you can use Amazon Aurora Global Database. This solution replicates data across Regions at the Aurora storage layer. In a DR scenario, the cluster in a secondary Region can be promoted to take on read/write responsibilities and become the main database.
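A sketch of that promotion with boto3 (the identifiers and ARN are placeholders): detaching the secondary cluster from the global database promotes it to a standalone read/write cluster, which is the typical unplanned-DR path.

    import boto3

    rds = boto3.client("rds", region_name="eu-west-1")  # surviving Region (placeholder)

    # Detach the secondary cluster; it becomes a standalone cluster that
    # accepts both reads and writes, and application traffic is repointed to it.
    rds.remove_from_global_cluster(
        GlobalClusterIdentifier="my-global-db",
        DbClusterIdentifier=(
            "arn:aws:rds:eu-west-1:123456789012:cluster:my-secondary-cluster"
        ),
    )

For planned failovers, where both Regions are healthy, there is also failover_global_cluster, which switches the writer Region without breaking up the global cluster.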

Aung Zan Baw