
I am new to Vitess. We have deployed Vitess in Kubernetes using Helm charts, exposed VTGate through a NodePort, and are able to connect MySQL Workbench using the exposed VTGate IP address. When we insert records using MySQL Workbench, the records are not being distributed across the shards.

  * 'Payrolls' is the keyspace, and we created two shards: payrolls/-80 and payrolls/80-.

  * The table schemas are as below:

     1. TenantList

        CREATE TABLE TenantList (
            TenantID  INT PRIMARY KEY,
            Name      VARCHAR(200)
        );

     2. PayrollMaster

        CREATE TABLE PayrollMaster (
            PayrollID  INT PRIMARY KEY,
            HHAID      INT,
            FOREIGN KEY (HHAID) REFERENCES TenantList (TenantID)
        );

     3. PayrollDetails

        CREATE TABLE PayrollDetails (
            PayrollDetailID  INT PRIMARY KEY,
            PayrollID        INT,
            HHAID            INT,
            FOREIGN KEY (PayrollID) REFERENCES PayrollMaster (PayrollID),
            FOREIGN KEY (HHAID)     REFERENCES TenantList (TenantID)
        );

  * The VSchema is as below:

      {
        "sharded": true,
        "vindexes": {
          "hash": {
            "type": "hash"
          }
        },
        "tables": {
          "payrollmaster": {
            "column_vindexes": [
              {
                "column": "HHA",
                "name": "hash"
              }
            ],
            "auto_increment": {
              "column": "PayrollInternalID",
              "sequence": "payrollmaster_seq"
            }
          },
          "payrolldetails": {
            "column_vindexes": [
              {
                "column": "HHA",
                "name": "hash"
              }
            ],
            "auto_increment": {
              "column": "PayrollDetailInternalID",
              "sequence": "payrolldetails_seq"
            }
          }
        }
      }
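    For context on the auto_increment settings above: Vitess backs sequences with tables in an unsharded keyspace. A minimal sketch of the backing tables, following the standard Vitess sequence convention (the exact DDL in our setup may differ):

      -- The 'vitess_sequence' comment tells Vitess to treat this table as a sequence.
      CREATE TABLE payrollmaster_seq (
          id      BIGINT,
          next_id BIGINT,
          cache   BIGINT,
          PRIMARY KEY (id)
      ) COMMENT 'vitess_sequence';

      -- Seed row: start handing out IDs at 1, caching 100 at a time.
      INSERT INTO payrollmaster_seq (id, next_id, cache) VALUES (0, 1, 100);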


    We are inserting the records like below: after "use payrolls;" we insert into
    TenantList, PayrollMaster, and PayrollDetails.
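    A minimal sketch of the actual statements (values are illustrative; column names as in the schema above):

      USE payrolls;

      -- Illustrative values only; HHAID is the column we intend to shard on.
      INSERT INTO TenantList (TenantID, Name) VALUES (1, 'Tenant A');
      INSERT INTO PayrollMaster (PayrollID, HHAID) VALUES (100, 1);
      INSERT INTO PayrollDetails (PayrollDetailID, PayrollID, HHAID) VALUES (1000, 100, 1);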

We are expecting the records to be distributed across the shards, but the tables have zero rows in the shards.
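For reference, a sketch of how we check the per-shard counts through VTGate, using its shard-targeting USE syntax:

      -- Target each shard directly through VTGate and count rows.
      USE `payrolls/-80`;
      SELECT COUNT(*) FROM PayrollMaster;

      USE `payrolls/80-`;
      SELECT COUNT(*) FROM PayrollMaster;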

The keyspace and shards were created as shown in the screenshot.

Is there any issue with the way we are trying this?

Thanks

Chethan SR

1 Answer


It looks like you want payrolls to be sharded. If so, you shouldn't create a shard "0"; you should only specify shards "-80" and "80-".
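A quick way to confirm which shards VTGate is actually routing to is Vitess's SHOW extension (a sketch; for a correctly sharded keyspace you should see only the two shards):

    -- Run through any VTGate connection; returns one row per serving shard,
    -- e.g. payrolls/-80 and payrolls/80-.
    SHOW VITESS_SHARDS;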

Sugu Sougoumarane
  • Thanks @SuguSougoumarane. We followed vitess/302_new_shards.yaml to create the new shards, and a shard was created for keyspace/0. What is the advantage of creating a shard for keyspace/0? If we select a key that is a foreign key to all the tables, will all the referenced tables be sharded automatically? – Chethan SR Mar 15 '20 at 07:26
  • The sharding scheme is specified in the vschema. The distribution of rows across the shards is decided by the primary vindex you choose: https://vitess.io/docs/reference/vschema/. Vitess doesn't recommend or support cross-shard foreign keys, for efficiency reasons; you should design your vschema to keep FK relationships within a shard. – Sugu Sougoumarane Mar 16 '20 at 15:28
  • I have updated the VSchema; can you please check? We have specified the vindexes, but the data was not sharded automatically. – Chethan SR Mar 17 '20 at 10:34
  • @SuguSougoumarane In the PlanetScale helm example, shard 0 is also present: https://github.com/planetscale/vitess/blob/master/examples/helm/302_new_shards.yaml. In that case, won't queries sent directly to VTGate work? – Nithya Mar 17 '20 at 15:59
  • Looks like there's a typo in the vschema? The primary vindex for both tables is "HHA", but I don't see that column in the table schema. In the example, shard 0 exists first, and then we reshard it into -80 and 80-. After resharding, shard 0 will be deleted. – Sugu Sougoumarane Mar 17 '20 at 17:12
  • @SuguSougoumarane We deleted shard 0, and now we are getting an error while connecting to MySQL from Workbench or the application. Error: "No partition found for tabletType master in keyspace payrolls". Are we missing anything? – Chethan SR Mar 26 '20 at 12:53
  • @ChethanSR If you add shards, you have to go through the resharding workflow to migrate data from the original shard to the new ones. You can delete the source shard only after you cut over the traffic. The example shows how to do it. – Sugu Sougoumarane Mar 26 '20 at 16:20
  • Thanks for the reply @SuguSougoumarane. We followed the steps below to delete shard 0: first we ran SplitClone -min_healthy_rdonly_tablets=1 payrolls/0, then migrated with MigrateServedTypes payrolls/0 rdonly, MigrateServedTypes payrolls/0 replica, and MigrateServedTypes payrolls/0 master, and finally ran DeleteShard -recursive payrolls/0. Are we missing anything? Thanks in advance. – Chethan SR Mar 27 '20 at 05:57
  • That looks right. I'm wondering if one of those steps failed; we've rehearsed those steps multiple times, and there are also online demos I presented at KubeCon that show they work. If you have the logs, can you check whether there are errors? – Sugu Sougoumarane Mar 27 '20 at 15:57