
I'm using SlashDB to layer a REST interface over a MySQL back-end. Mostly, I define queries through the 'SQL Pass-thru' feature. We're using this system to record test data from various test stations.

When sending test data into the database, SlashDB seems to choke once the URL exceeds a certain length (about 2K worth of data). The error returned is a '502', which is strange because 'URI Too Long' normally returns a '414'. When I run the query directly in MySQL, there is no problem.

Here's the table definition:

CREATE TABLE `test_result` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `test_instance_id` bigint(20) unsigned NOT NULL,
  `test_instance_test_station_id` varchar(15) NOT NULL,
  `test_instance_unit_sn` varchar(30) NOT NULL,
  `test_instance_contact_address_id` int(2) NOT NULL,
  `testStep` varchar(45) DEFAULT NULL,
  `testData` blob,
  `externalDataLink` text,
  PRIMARY KEY (`id`),
  KEY `fk_test_result_test_instance1_idx` (`test_instance_id`,`test_instance_test_station_id`,`test_instance_unit_sn`,`test_instance_contact_address_id`),
  CONSTRAINT `fk_test_result_test_instance1` FOREIGN KEY (`test_instance_id`, `test_instance_test_station_id`, `test_instance_unit_sn`, `test_instance_contact_address_id`) REFERENCES `test_instance` (`id`, `test_station_id`, `unit_sn`, `contact_address_id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8

And here's the URL (with the big data truncated):

/post-test-result/testId/116/locationId/99/stationId/BO-01/sn/991807000003/stepName/test2/testData/[2K's worth of data here]/dataUrl/bye2.json?limit=29
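To put a number on it, here's a rough sketch of how long the request path gets, with a hypothetical 2 KB stand-in for the real test data:

```shell
# Build the same path with a stand-in 2 KB payload and print its length;
# the testData segment is what pushes the URL past the server's limit.
DATA=$(head -c 2048 /dev/zero | tr '\0' 'A')
URL="/post-test-result/testId/116/locationId/99/stationId/BO-01/sn/991807000003/stepName/test2/testData/${DATA}/dataUrl/bye2.json?limit=29"
echo "URL length: ${#URL}"
```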

The query defined through 'SQL Pass-thru':

insert into test_result (test_instance_id, test_instance_contact_address_id, test_instance_test_station_id, test_instance_unit_sn, testStep, testData, externalDataLink) values (:testId, :locationId, :stationId, :sn, :stepName, :testData, :dataUrl);

Would anybody be able to shed any light?

Pang

1 Answer


Try updating the uwsgi buffer values in the /etc/nginx/nginx.conf file:

server {
    uwsgi_buffer_size 8k;
    uwsgi_buffers 4 8k;
    uwsgi_busy_buffers_size 16k;

    # ...
}

Then, in the /etc/slashdb/slashdb.ini file, add `buffer-size = 32768` at the end of the [uwsgi] section. The section should look like:

# uWSGI config for the service script; starts uWSGI as a daemon
[uwsgi]
socket = 127.0.0.1:8001
virtualenv = /opt/slashdb
daemonize = /var/log/slashdb/uwsgi.log
log-maxsize = 20971520
master = true
enable-threads = true
single-interpreter = true
lazy-apps = true
processes = 1
threads = 2
paste = config:%p
paste-logger = %p
buffer-size = 32768

Then restart services:

sudo service slashdb stop
sudo service slashdb start
sudo service nginx restart

BTW, SlashDB at the moment does not reflect the BLOB type, but if you changed the testData column type to TEXT, you would be able to use the POST method in Data Discovery, which seems more suitable for your use case.

Using curl, it would be:

curl -v 'http://slashdb.reshareu/db/testing/test_result.json' \
-X POST \
-H 'apikey: your-api-key-here' \
-H 'content-type: application/json' \
--data '{
  "test_instance_test_station_id": "BO-01",
  "test_instance_contact_address_id": 99,
  "test_instance_unit_sn": "991807000003",
  "testStep": "test2",
  "externalDataLink": "bye2",
  "test_instance_id": 116,
  "testData": "Very long yata, yata, yata..."
}'
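As a side note, the request body can be sanity-checked locally before POSTing; a small sketch using Python's json module (jq would work too) on the same payload as the curl example:

```shell
# Validate the request body locally before sending it; this only checks
# JSON syntax, not SlashDB's schema expectations.
BODY='{
  "test_instance_test_station_id": "BO-01",
  "test_instance_contact_address_id": 99,
  "test_instance_unit_sn": "991807000003",
  "testStep": "test2",
  "externalDataLink": "bye2",
  "test_instance_id": 116,
  "testData": "Very long yata, yata, yata..."
}'
printf '%s' "$BODY" | python3 -m json.tool > /dev/null && echo "valid JSON"
```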
mdob
  • The uwsgi buffer values don't work, unfortunately. However, I noticed something strange as well. On my SlashDB installation, uwsgi doesn't run: `sudo systemctl status uwsgi` reports `uwsgi.service Loaded: not-found (Reason: No such file or directory) Active: inactive (dead)` – Antonius Gan Mar 09 '18 at 18:24
  • I've reproduced your error in my env, and uwsgi_buffer_size worked for a URL around 3K characters long. Have you reloaded nginx after making the changes? To see if SlashDB is running, try `ps aux | grep uwsgi` and check whether the process is there. BTW, how did you install SlashDB: script, system package, Docker? Which version of SlashDB are you using? – mdob Mar 10 '18 at 19:36
  • Thanks for taking the time to help me with this. I installed SlashDB as a system package, as prescribed on the website. It's version 0.9.53. Yes, I restarted SlashDB and memcached after I changed the configuration file. I even tried rebooting the virtual machine. `ps aux | grep uwsgi` came up empty even when SlashDB was running and responding. This issue crops up on all my development machines, by the way (I have 2). – Antonius Gan Mar 10 '18 at 23:53
  • OK, I forgot to write about change in uwsgi settings in /etc/slashdb/slashdb.ini. I've updated my answer. – mdob Mar 12 '18 at 09:06
  • If `ps aux | grep uwsgi` doesn't return anything, then SlashDB isn't running. When visiting the main page you should be getting `502 Bad Gateway`. Unless you've set up caching, responses could be coming from the nginx cache. Also take a look at the nginx logs in `/var/log/nginx` to see the actual reason for the 502 error. – mdob Mar 12 '18 at 09:17
  • mdob, your latest update to slashdb.ini works! The curl for the 'Data Discovery' mode works as well. Now, we have a couple of ways to do this. Further clarification: 1. How does the buffer-size in slashdb.ini relate to the uwsgi_buffer_size in nginx.conf? 2. Is this information in the SlashDB documentation? I'd like to learn more. 3. Is it possible to put data in the body of a request (as in your curl example), but through SlashDB's SQL Pass-thru feature? Thanks a bunch for the help! – Antonius Gan Mar 12 '18 at 19:02
  • @AntoniusGan In general, nginx works as a reverse proxy and uwsgi as the application server. Both limit header size to something around 4-8KB, perhaps to maintain stability, to prevent HTTP header injection, or simply to save memory. It's very unusual to use very long URLs (over 2000 characters). Have a look at http://nginx.org/en/docs/http/ngx_http_uwsgi_module.html#uwsgi_buffer_size and https://uwsgi-docs-additions.readthedocs.io/en/latest/Options.html#buffer-size – mdob Mar 14 '18 at 09:34