AWS Managed Services is a bit of a white-glove offering, so I'm not familiar with how standardised their offering and guard rails are. There are a few different parts that come into play:
1. regions that host your landing zone's shared infrastructure, e.g. the logging account, Control Tower, AWS SSO, etc.
2. regions that host shared infrastructure that you deploy into every account managed under the landing zone, e.g. a default VPC (peered to a TGW)
3. regions that are allowed to be used in managed accounts, e.g. because an SCP on the OU forbids everything else
From my understanding, one AMS multi-account landing zone always operates in a single region for all three of those.
That may be a fine restriction when starting out, but my experience with large landing zones (> 500 accounts) is that you keep 1. and 2. locked to a single region, but restrict 3. only for governance/compliance reasons (e.g. EU regions only). That gives teams the freedom to use AWS regions in whatever way makes the most sense for their applications: Lambda@Edge functions, regional S3 buckets, etc.
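To illustrate what I mean by restricting 3. for governance/compliance only, here's a rough sketch of a region-allowlist SCP attached to an OU. The OU id, the allowed regions, and the exempted global services are placeholder assumptions on my part, not anything AMS-specific:

```python
import json
import boto3

# Sketch only: deny everything outside an allowlist of (EU) regions,
# while exempting global services whose API calls resolve to us-east-1.
ALLOWED_REGIONS = ["eu-central-1", "eu-west-1"]   # example allowlist
TARGET_OU_ID = "ou-xxxx-xxxxxxxx"                 # placeholder OU id

region_restriction_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideAllowedRegions",
            "Effect": "Deny",
            # Global services need to stay usable even though their
            # requests are recorded against us-east-1.
            "NotAction": [
                "iam:*",
                "organizations:*",
                "route53:*",
                "cloudfront:*",
                "support:*",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ALLOWED_REGIONS}
            },
        }
    ],
}

org = boto3.client("organizations")

# Create the SCP and attach it to the OU that holds the managed accounts.
policy = org.create_policy(
    Name="restrict-to-eu-regions",
    Description="Governance: only EU regions may be used in managed accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(region_restriction_scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=TARGET_OU_ID,
)
```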
Of course, applications that do need on-premise connectivity gravitate strongly towards the region hosting the transit gateway. Depending on what your on-prem setup looks like, larger orgs can later add multiple landing zones or, preferably, use a modular landing zone approach with "TGW peerings as a service".
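As a rough sketch of what such a peering boils down to under the hood (all ids, account numbers, and regions below are made-up placeholders; in a "peerings as a service" setup the acceptance and routing on the central side would be automated behind some request workflow):

```python
import boto3

# Sketch only: peer a workload landing zone's TGW with the central
# on-prem-connected TGW in another region/account.
CENTRAL_TGW_ID = "tgw-0123456789abcdef0"   # TGW with the on-prem attachment
CENTRAL_ACCOUNT_ID = "111111111111"        # placeholder account id
CENTRAL_REGION = "eu-central-1"

LOCAL_TGW_ID = "tgw-0fedcba9876543210"     # TGW of the requesting landing zone

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Request the peering from the local TGW towards the central one.
peering = ec2.create_transit_gateway_peering_attachment(
    TransitGatewayId=LOCAL_TGW_ID,
    PeerTransitGatewayId=CENTRAL_TGW_ID,
    PeerAccountId=CENTRAL_ACCOUNT_ID,
    PeerRegion=CENTRAL_REGION,
)
attachment_id = peering["TransitGatewayPeeringAttachment"]["TransitGatewayAttachmentId"]

# The central side still has to accept the attachment and add static routes,
# since TGW peerings don't propagate routes dynamically.
print(f"Requested peering attachment {attachment_id}")
```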