
I'm trying to make a near "real time" graph using CanvasJS. I'm using a Raspberry Pi to measure data and send it to a MariaDB database on an external virtual server.

From the RPi I'm sending n (a count from 1 to infinity), napatie (voltage, the value read over SPI) and cas (time). It's pretty slow because I'm sending only about 1000 samples per second; when just writing to a .txt file instead of sending to the database I get around 30k samples per second. Code from the RPi:

#include <wiringPi.h>
#include <stdio.h>
#include <string.h>
#include <wiringPiSPI.h>
#include <iostream>
#include <stdint.h>
#include <stdlib.h>
#include <mariadb/mysql.h>
#include <string>
#include <ctime>
using namespace std;

int main(){

  time_t rawtime;
  struct tm * timeinfo;
  char buffer [80];

//CONNECT TO DB

MYSQL *conn;

  if ((conn = mysql_init(NULL)) == NULL)
  {
    fprintf(stderr, "Could not init DB\n");
    return EXIT_FAILURE;
  }
  if (mysql_real_connect(conn, "IP", "name", "pass", "dbname", 0, NULL, 0) == NULL)
  {
    fprintf(stderr, "DB Connection Error\n");
    return EXIT_FAILURE;
  }

//READ FROM SPI
wiringPiSetup();
wiringPiSPISetup(0, 2000000);
int i = 1;

while (1){
  uint8_t spiData [2];
  wiringPiSPIDataRW (0, spiData, 2);
  // Combine the two SPI bytes into one 12-bit ADC reading
  int MSB_1 = spiData[1] >> 1;
  int MSB_0 = (spiData[0] & 0b00011111) << 7;
  int a = MSB_1 + MSB_0;
  // Scale the raw count to volts (5 V reference, 12-bit range)
  float b = (5.0f * (float)a) / 4096.0f;

  time (&rawtime);
  timeinfo = localtime (&rawtime);
  strftime (buffer, 80, "%Y-%m-%d %H:%M:%S", timeinfo);

  //INSERT TO DB
  string query = "INSERT INTO tabulka (n, napatie, cas) VALUES (" + to_string(i) + "," + to_string(b) + ",'" + buffer + "')";

  i++;

  if (mysql_query(conn, query.c_str()) != 0)
  {
    fprintf(stderr, "Query Failure\n");
    return EXIT_FAILURE;
  }
  delayMicroseconds(10);
}
mysql_close(conn);
return 0;
}

VIRTUAL SERVER SIDE:

To refresh the page I'm using a simple <meta http-equiv="refresh" content="1">.

Code to make the graph on the virtual server:

<?php

//CONNECT TO DB AND READ
$dataPoints = array();

$conn = mysqli_connect('127.0.0.1', 'name', 'pass', 'dbname');
if ($conn->connect_error) {
        die("Connection error: " . $conn->connect_error);
}

$result = $conn->query("SELECT n, napatie FROM hodnoty.tabulka");

if ($result->num_rows > 0) {
        while ($row = $result->fetch_assoc()) {
                //$dataPoints[] = $row;
                array_push($dataPoints, array("x" => $row['n'], "y" => $row['napatie']));
        }
}


//MAKE GRAPH

?>
<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="refresh" content="1" >
<script>
window.onload = function() {

var dataPoints = <?php echo json_encode($dataPoints, JSON_NUMERIC_CHECK); ?>;
var dataLength = 1000; 
var chart = new CanvasJS.Chart("chartContainer", {
    theme: "light2",
    title: {
        text: "NAPATIE NA VSTUPE DO RPI"
    },
    axisX:{
        title: "n [/-]"
    },
    axisY:{
        includeZero: false,
        title: "U [V]"
    },
    data: [{
        type: "line",
        yValueFormatString: "#,##0.0#",
        toolTipContent: "{y} V",
        dataPoints: dataPoints 
    }]
});

    // Keep only the most recent dataLength points: a single shift()
    // removes just one point, so loop until the length fits.
    while (dataPoints.length > dataLength) {
        dataPoints.shift();
    }



chart.render();

}
</script>
</head>
<body>
<div id="chartContainer" style="height: 370px; width: 100%;"></div>
<script src="https://canvasjs.com/assets/script/canvasjs.min.js"></script>
</body>
</html>

To delete old data I'm using a MariaDB EVENT like this:

CREATE EVENT mazanie ON SCHEDULE EVERY 1 SECOND ENABLE
DO DELETE FROM tabulka WHERE cas < CURRENT_TIMESTAMP - INTERVAL 5 SECOND;

Column cas has data type TIMESTAMP.

AND FINALLY, the problem is that I want to always see roughly the same number of values in the graph, but the event deletes rows unevenly: sometimes I have around 150 values and sometimes around 1500. Let's say I want to always see around 1000 samples.

HERE YOU CAN SEE IT: http://147.232.175.92/info_1.php

  • Please read https://stackoverflow.com/q/58808332/1839439 – Dharman Feb 25 '20 at 09:02
  • Thank you for letting me know. I checked my config file and display_errors is set to OFF. Also I commented out the part where I try to manually show the connection error. – SimonLi Feb 25 '20 at 11:22

1 Answer


Change the code to collect 10 values, build a batch insert (INSERT ... VALUES (...), (...), ..., (...);), and then issue that single query. This may increase your capacity from 1K to 5K rows per second.
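
A minimal sketch of how the RPi loop could build such a batch, reusing the conn, i, b, and buffer variables from the question's code (the batch size of 10 is just the starting point suggested above):

// Sketch: accumulate 10 rows, then send one multi-row INSERT.
string values;
for (int batch = 0; batch < 10; batch++) {
    // ... read SPI into b and format buffer exactly as in the original loop ...
    if (!values.empty()) values += ",";
    values += "(" + to_string(i) + "," + to_string(b) + ",'" + buffer + "')";
    i++;
}
string query = "INSERT INTO tabulka (n, napatie, cas) VALUES " + values;
if (mysql_query(conn, query.c_str()) != 0) {
    fprintf(stderr, "Query Failure\n");
    return EXIT_FAILURE;
}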

Does the EVENT compete with the insertion script? That is, are they touching the same table? Instead, have the script do the DELETE itself and tack LIMIT 1000 onto it (see the sketch below).

If you can tolerate the delay of batching 10 values, try 100 instead; this will run even faster. Also tweak the LIMIT.

The DELETE could replace every 10th delayMicroseconds(10); call, which gives you one more thing to tweak, as sketched below.
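
A sketch of that interleaving, assuming the batching loop above and an index on cas; the 5-second window, the LIMIT 1000, and the every-10th-batch cadence are all knobs to tune:

// Sketch: after every 10th batch, trim old rows here instead of in the EVENT.
// LIMIT caps how many rows one DELETE can touch, smoothing its impact.
static int batches = 0;
if (++batches % 10 == 0) {
    const char *del =
        "DELETE FROM tabulka "
        "WHERE cas < CURRENT_TIMESTAMP - INTERVAL 5 SECOND "
        "LIMIT 1000";
    if (mysql_query(conn, del) != 0) {
        fprintf(stderr, "Delete Failure\n");
    }
}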

More Rationale:

  • Because of per-statement overhead and transactional atomicity, inserting 100 rows at a time is about 10 times as fast as inserting them one at a time. (Going beyond 100 gets into diminishing returns.)
  • The DELETE is deleting a varying number of rows. When it deletes lots of rows, it is having more impact on the INSERTs and SELECTs, possibly leading to the apparent gaps in the data.
  • A 10 µs delay is so far below human perception that it seems unreasonable to go that low.
  • Again, for humans, consider freezing the graph's scale at about ±10 units, making it easier for a user to compare one page refresh to the next.
  • And send the actual times so that the X axis will be correctly spread out, not evenly spread out. That way you can see the gaps and perhaps discover what pattern they follow.

Get most of that done, then come back for more critique and tuning.

– Rick James