
My industrial automation application uses SQL Server 2017 Standard Edition on a Dell T330 server with the following configuration:

  • Xeon E3-1200 v6
  • 16 GB DDR4 UDIMMs
  • 2 x 2 TB 7200 RPM HDDs (RAID 1)

In this database, I am writing to the following tables:

Table: tableHistory
Insert interval: every 2 seconds
410 float columns
409 int columns

--

Table: tableHistoryLong
Insert interval: every 10 minutes
410 float columns
409 int columns

--

Table: tableHistoryMotors
Insert interval: every 2 seconds
328 float columns
327 int columns

--

Table: tableHistoryMotorsLong
Insert interval: every 10 minutes
328 float columns
327 int columns

--

Table: tableEnergy
Insert interval: every 700 milliseconds
220 float columns
219 int columns

Note:

When I generate reports/graphs, my application buffers the inserts, because the system cannot insert and query at the same time; the queries are quite heavy.

The columns hold values such as current, temperature, level, etc. This data is recorded for one year.

Question

With this level of processing, can I have performance problems?

Do I need better hardware due to high demand?

Can my application break at some point due to the hardware?

  • We have no idea. The generic answer is yes, mostly because the questions are generic. Yes, you CAN have performance problems, if you define performance strictly enough. Yes, you need better hardware for high demand, if you define high demand high enough. Yes, it WILL break at some point due to hardware - in the end, hardware DOES fail at some point, so every application on a server fails at some point due to hardware. – TomTom Sep 04 '20 at 18:50
  • Note that for the first 2 questions the CAN part is basically "can", not "will" - and we do not know what you throw at it, so it CAN fail if enough people use it. You also have low reserves due to the use of drives that are absolutely unsuitable for a database and only OK for VERY low-end use. But mostly, without a lot of details, the only answer here is always yes. – TomTom Sep 04 '20 at 18:51
  • The problem is that, during queries, the server is very slow. I agree that the question is very general. But I have no basis for what would be ideal at this point. – user3512987 Sep 04 '20 at 19:09
  • Well, start firing the person who thought putting a database on 7200 RPM disks would scale. On something that small I would go all-in on SSD - 1000 times the IO performance. – TomTom Sep 04 '20 at 19:16
  • Another issue: I was forced to limit SQL Server's RAM usage to 13 GB. I will have to check the possibility of an upgrade. – user3512987 Sep 04 '20 at 19:27
  • Would you have any idea how much RAM I would need? – user3512987 Sep 04 '20 at 19:31
  • Please consider columnstores. – Conor Cunningham MSFT Sep 04 '20 at 19:31

1 Answer


Your question may be closed as too broad, but I want to elaborate on the comments and offer additional suggestions.

How much RAM you need for adequate performance depends on the reporting queries. Factors include the number of rows touched, the execution plan operators (sort, hash, etc.), and the number of concurrent queries. More RAM can also improve performance by avoiding IO, which is especially costly with spinning media.
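
A quick way to see whether the currently configured RAM is even holding the hot data is to check buffer pool usage per database. This is only a diagnostic sketch using a standard DMV (the 8 KB page size is fixed in SQL Server); what counts as "enough" is your call:

    -- Sketch: how much of the buffer pool each database currently occupies.
    -- sys.dm_os_buffer_descriptors lists the 8 KB pages cached in memory.
    SELECT DB_NAME(database_id) AS database_name,
           COUNT(*) * 8 / 1024  AS buffer_pool_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY buffer_pool_mb DESC;

If the history database's share is far below the size of the data your reports touch, the queries will fall back to the slow spinning disks.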

A reporting workload (large scans) against a 1-2 TB database with traditional rowstore tables needs fast storage (SSD) and/or more RAM (hundreds of GB) to provide decent performance. The existing hardware is the worst-case scenario because data are unlikely to be cached with only 16 GB of RAM, and a single spindle can only read about 150 MB per second. Based on my rough calculation of the schema in your question, a monthly summary query of tableHistory will take about a minute just to scan 10 GB of data (assuming a clustered index on a date column). Query duration will increase with the number of concurrent queries, such that it would take at least 5 minutes per query with 5 concurrent users running the same query, due to disk bandwidth limitations. SSD storage can sustain multiple GB per second, so with the same query and RAM, the data transfer time for the query above will be under 5 seconds.
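
For reference, the kind of back-of-envelope estimate behind those numbers looks something like this, assuming SQL Server's default 8 bytes per float and 4 bytes per int and ignoring row and page overhead (which push the real on-disk size toward the ~10 GB figure above):

    -- Rough sizing of tableHistory: bytes per row, rows per day, data volume.
    SELECT (410 * 8 + 409 * 4)                                   AS bytes_per_row,  -- 4916
           86400 / 2                                             AS rows_per_day,   -- 43200 (one insert every 2 s)
           (410 * 8 + 409 * 4) * (86400 / 2) / 1048576.0         AS mb_per_day,     -- ~203
           (410 * 8 + 409 * 4) * (86400 / 2) * 30.0 / 1073741824 AS gb_per_month;   -- ~5.9

At roughly 150 MB per second of sequential read, scanning 6-10 GB of a month's data takes on the order of 40-70 seconds, which is where the "about a minute" figure comes from.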

A columnstore (e.g. a clustered columnstore index), as suggested by @ConorCunninghamMSFT, will greatly reduce the amount of data transferred from storage because only data for the columns specified in the query are read, and the inherent columnstore compression will reduce both the size of data on disk and the amount transferred from disk. The compression savings depend largely on the actual column values, but I'd expect 50 to 90 percent less space compared to a rowstore table.
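
As a minimal sketch (the table name is taken from your question; if the table already has a rowstore clustered index, that index would have to be dropped or converted first):

    -- Sketch: store tableHistory as a clustered columnstore.
    -- Data is stored and compressed per column, so a report touching a handful
    -- of the ~800 columns reads only those column segments from disk.
    CREATE CLUSTERED COLUMNSTORE INDEX ccsi_tableHistory
        ON dbo.tableHistory;

Clustered columnstore indexes are available in Standard Edition from SQL Server 2016 SP1 onward, so your SQL Server 2017 Standard install supports this.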

Reporting queries against measurement data are likely to specify date range criteria, so partitioning the columnstore by date will limit scans to the specified date range without a traditional b-tree index. Partitioning will also facilitate purging for the 12-month retention requirement with sliding window partition maintenance (partition TRUNCATE, MERGE, SPLIT), greatly improving the performance of the purge process compared to a delete query.
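
A minimal sketch of the sliding window, assuming monthly partitions on a hypothetical sample_time column (object names and boundary dates are illustrative only):

    -- Monthly partition function/scheme; one boundary value per month.
    CREATE PARTITION FUNCTION pf_HistoryMonthly (datetime2(0))
        AS RANGE RIGHT FOR VALUES ('2020-01-01', '2020-02-01', '2020-03-01' /* ... */);

    CREATE PARTITION SCHEME ps_HistoryMonthly
        AS PARTITION pf_HistoryMonthly ALL TO ([PRIMARY]);

    -- After (re)building the columnstore table on ps_HistoryMonthly(sample_time),
    -- purging the oldest month is metadata-only work instead of a huge DELETE:
    TRUNCATE TABLE dbo.tableHistory WITH (PARTITIONS (1));
    ALTER PARTITION FUNCTION pf_HistoryMonthly() MERGE RANGE ('2020-01-01');

    -- Each month, a new boundary is added at the leading edge of the window:
    ALTER PARTITION SCHEME ps_HistoryMonthly NEXT USED [PRIMARY];
    ALTER PARTITION FUNCTION pf_HistoryMonthly() SPLIT RANGE ('2021-01-01');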

Dan Guzman