
I am working on a Laravel project where the migrations are already built. The problem, though, is that a lot of tables were built the "regular" way, with too many columns. I am planning to split them into multiple tables and connect them with relationships.

For instance, this is a users table (there are much more columns there):

id, username, language, type, name, photo, about, phone, email, password, time_zone

What I am planning to do is create something similar to WordPress (speaking of the table schema, not the code, of course), so there would be these tables:

users, 
user_metas // or user_options,
posts, 
post_metas // or post_options

Each *_metas table would be related to its main model by a hasMany relationship (a user hasMany options), or I could even make a single options table with a one-to-many relation to both User and Post.
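
A minimal sketch of what the Eloquent side could look like, assuming a recent Laravel and hypothetical UserMeta / Meta model names (the single-shared-table variant maps naturally onto Laravel's polymorphic morphMany rather than a plain hasMany):

```php
<?php

use Illuminate\Database\Eloquent\Model;

class User extends Model
{
    // Variant 1: a dedicated user_metas table.
    public function metas()
    {
        return $this->hasMany(UserMeta::class);
    }

    // Variant 2: one shared metas table for both User and Post,
    // keyed by metable_id / metable_type columns.
    public function options()
    {
        return $this->morphMany(Meta::class, 'metable');
    }
}

class UserMeta extends Model
{
    public function user()
    {
        return $this->belongsTo(User::class);
    }
}
```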

This approach will hopefully simplify working with the DB in some ways, and make it simpler to add new options, but on the other hand it will be heavier on the server (more SQL queries to perform) and will open the door to more bugs (more code === more bugs). Also, I am not a fan of putting all of the info into one table: if there are relations, why not use them?

I should note that users and posts will have a lot of options, with the ability to add new ones (and different posts won't share the same set of options).

So, what approach is better here?

Max Krizh

2 Answers


I think having a lot of columns in a table is fine as long as it doesn't violate any of the normal forms (you can google them; they are essentially the best practices for designing a database).

I would only add relationships where they are actually required. For example, if you want to map multiple phone numbers to one user, then you add a one-to-many relationship (one user, many phones). The join operation your read queries would then need to use doesn't come for free.
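
To make that concrete, here is a sketch of such a one-to-many mapping as a Laravel migration (the phones table and its columns are illustrative, and this assumes a recent Laravel version):

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        // One user, many phones: the foreign key lives on the "many" side.
        Schema::create('phones', function (Blueprint $table) {
            $table->id();
            $table->foreignId('user_id')->constrained()->cascadeOnDelete();
            $table->string('number');
            $table->timestamps();
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('phones');
    }
};
```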

For the following part I'm assuming your metas would be defined along these lines:

user_id; key; value
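
In a Laravel migration that might look like the following sketch (the user_metas name and the unique index are my assumptions, not from the question):

```php
<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

// Inside a migration's up() method:
Schema::create('user_metas', function (Blueprint $table) {
    $table->id();
    $table->foreignId('user_id')->constrained()->cascadeOnDelete();
    $table->string('key');
    // Catch-all column: every value, including numbers, is stored
    // as text, which is where the space overhead discussed below comes from.
    $table->string('value');
    $table->unique(['user_id', 'key']); // one value per key per user
});
```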

If you create something like metas, you will have the problem of defining the datatype of value. To support all possible types, I guess you would pick the VARCHAR datatype. But then, for example, if you stored a 32-bit number there, you would be wasting server disk space: an int32 is 4 bytes, while "-2147483648" is 11 characters, so 11 bytes.

Note: most relational databases offer dynamic columns or a JSON datatype, but those should only be used when you want to store properties (size, color, weight, etc.) that are not defined at the design stage and whose keys can change from row to row. See, for example: https://mariadb.com/kb/en/dynamic-columns/
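
In Laravel specifically, that would be a json column plus an array cast; a minimal sketch (the column name options is my choice, and this assumes a database with native JSON support, such as MySQL 5.7+ or MariaDB):

```php
<?php

use Illuminate\Database\Eloquent\Model;

// Migration side, inside up():
//   $table->json('options')->nullable();

class Post extends Model
{
    // Eloquent serializes/deserializes the JSON column automatically.
    protected $casts = [
        'options' => 'array',
    ];
}

// Usage: arbitrary, per-row keys without any schema change.
// $post = Post::find(1);
// $post->options = ['color' => 'red', 'weight' => 12];
// $post->save();
```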

Kudzė

Life is a tradeoff, and so is your challenge! But how? In database design there is a topic named normalization that tries to break one big table (containing all the system's attributes together) into smaller ones, first to 1NF, and then, based on well-defined rules, onward to 2NF, 3NF, BCNF and so on. Although there are more levels after BCNF, normalizing to that point is usually said to be enough, because more and more breakdowns bring new challenges and disadvantages with them (more relationships and more queries, as you know), alongside the obvious advantages (less redundancy and less complexity).

You must choose a solution based on your vision and your desired functionality. Just consider that there is no predefined best-practice level of normalization, and whatever you choose has its advantages and disadvantages. In your situation: if users have multiple pieces of metadata, if individual access to that data is probable, if the users' common information is fetched frequently, if you may want user-profiling microservices later, or if the user metadata is not always required, then it is rational to break up the users table.
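
For instance, splitting the rarely-read profile columns away from the frequently-read login columns gives a one-to-one users / user_profiles pair; a sketch (table and class names are illustrative, not from the question):

```php
<?php

use Illuminate\Database\Eloquent\Model;

class User extends Model
{
    // Hot columns (username, email, password, ...) stay on users;
    // colder columns (about, photo, time_zone, ...) move out.
    public function profile()
    {
        return $this->hasOne(UserProfile::class);
    }
}

class UserProfile extends Model
{
    public function user()
    {
        return $this->belongsTo(User::class);
    }
}
```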

behzad m salehi