Module: drivers/loggers/mysql

When this driver is active, the packets transiting on the bus are stored in a SQL database (mysql). This database can be located on the system drive, on a USB key, or on an external mysql server.

Logger parameters

The mysql logger parameters are defined in the logger json (refer to the core/loggers module):

{
	id: 1,
	name: 'mysql usb',
	description: 'logs mysql',
	driver: 'mysql.js', // the JavaScript driver file used to log values.
	activ: 1,
	state: 'connected',  //The state can be "disconnected", "connecting", or "connected"
	json: {  // logger driver parameters
		storage: 'usb', // tries to find the database on the usb stick, and starts the unix mysqld process. 
						// the storage could also be "external" to log on a remote mysql database.
		
		info : {  	//if the storage is "external", then additional information needs to be specified.
					//these parameters are documented in the nodejs mysql driver : https://github.com/mysqljs/mysql
			host            : '54.33.22.114',
			port            :  3306,
			user            : 'root',
			password        : 'my password',
			database        : 'weble'
		}
	}
}
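
For the "external" storage mode, the info block is passed to the nodejs mysql driver more or less as-is. The snippet below is a minimal sketch (not the actual driver code) of how such a connection could be opened; the function name and error handling are illustrative only.

// Minimal sketch, assuming the logger entry shown above has already been parsed
// into a plain object. The function name and error handling are illustrative,
// not the actual driver code.
const mysql = require('mysql');   // https://github.com/mysqljs/mysql

function connectExternal(loggerEntry, done) {
	// loggerEntry.json.info carries host, port, user, password and database
	const connection = mysql.createConnection(loggerEntry.json.info);

	connection.connect(function (err) {
		if (err) {
			// the logger state would stay "disconnected"
			return done(err);
		}
		// the logger state would move from "connecting" to "connected"
		done(null, connection);
	});
}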


Database tables

  • logs : raw logs. For each data point, the administration interface lets you choose the log frequency, or to log the value every time it changes. The logs table contains the data point address, the raw packet (hexadecimal string), and the decoded value (numerical).
For performance reasons (long-period charts), there are also tables with precomputed aggregated data points (avg, min, max, number of samples); an illustrative schema sketch follows the list:
  • logs_quarterly (logs aggregated every 15 minutes).
  • logs_3hourly
  • logs_daily
  • logs_weekly
  • logs_monthly
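
As an illustration of the layout described above, the sketch below shows what the logs and logs_quarterly schemas could look like. The column names, types and indexes are assumptions made for this example, not the driver's actual schema.

// Hedged sketch of the assumed table layout (not the actual schema).
const createLogs = `
	CREATE TABLE IF NOT EXISTS logs (
		address VARCHAR(32) NOT NULL,   -- data point address
		ts      DATETIME    NOT NULL,   -- time the packet was logged
		raw     VARCHAR(64),            -- raw packet, hexadecimal string
		value   DOUBLE,                 -- decoded numerical value
		KEY idx_address_ts (address, ts)
	)`;

const createLogsQuarterly = `
	CREATE TABLE IF NOT EXISTS logs_quarterly (
		address   VARCHAR(32) NOT NULL,
		ts        DATETIME    NOT NULL, -- start of the 15-minute bucket
		avg_value DOUBLE,
		min_value DOUBLE,
		max_value DOUBLE,
		samples   INT,                  -- number of raw samples in the bucket
		KEY idx_address_ts (address, ts)
	)`;
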
All the tables are automatically partitioned by time period (usually month or year), and the oldest data (after a few years) is automatically dropped. Note that it is possible to query data with an aggregation period that does not directly match the aggregation period of a mysql table: for example, if the driver is queried with a 20-minute aggregation interval, it reconstitutes the average every 20 minutes by combining queries on the logs table (raw data) and on the pre-computed logs_quarterly table. The result rows are merged and streamed as a series of callbacks.
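
The sketch below illustrates how such a query could be streamed back as a series of callbacks, using the row events of the nodejs mysql driver. It only re-aggregates the raw logs table into 20-minute buckets with a plain AVG(); the real driver also folds in the pre-computed logs_quarterly rows and computes the time-weighted average described in the next section. Table and column names follow the assumed schema above.

// Hedged sketch: stream 20-minute aggregates of the raw logs table back to the
// caller as a series of callbacks. The real driver also combines the
// pre-computed logs_quarterly table and uses a time-weighted average.
function streamTwentyMinuteBuckets(connection, address, from, to, onRow, onEnd) {
	const sql = `
		SELECT FLOOR(UNIX_TIMESTAMP(ts) / 1200) * 1200 AS bucket,  -- 20 minutes = 1200 s
		       AVG(value) AS avg_value,
		       MIN(value) AS min_value,
		       MAX(value) AS max_value,
		       COUNT(*)   AS samples
		  FROM logs
		 WHERE address = ? AND ts BETWEEN ? AND ?
		 GROUP BY bucket
		 ORDER BY bucket`;

	connection.query(sql, [address, from, to])
		.on('error', function (err) { onEnd(err); })   // query error
		.on('result', function (row) { onRow(row); })  // one callback per merged row
		.on('end', function () { onEnd(null); });      // all rows have been streamed
}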

Average computation

The average aggregate is time-weighted. As shown in the schema below, the last known value is considered valid until a new one is logged (i.e. there is no linear interpolation when computing the average).

 _________________
 value 6         |            ________________________
                 |           |  value 5              |
                 |___________|                       |
                  value 3                            |
                                                     |__________
                                                       value 2  
 ________________|___________|_______________________|__________  time (minutes)
0               16          27                      50         60

average = (6*16 + 3*11 + 5*23 + 2*10) / 60 = 4.4
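
A minimal sketch of this "last value holds" average, assuming the samples are {time, value} pairs sorted by time and that the value at the start of the window is known (function and field names are illustrative):

// Hedged sketch of the time-weighted average described above: each sample's
// value is held until the next sample, then weighted by its duration.
function timeWeightedAverage(samples, windowStart, windowEnd) {
	let sum = 0;
	for (let i = 0; i < samples.length; i++) {
		const start = Math.max(samples[i].time, windowStart);
		const end = (i + 1 < samples.length)
			? Math.min(samples[i + 1].time, windowEnd)
			: windowEnd;
		if (end > start) {
			sum += samples[i].value * (end - start);
		}
	}
	return sum / (windowEnd - windowStart);
}

// The example above: 6 until minute 16, 3 until 27, 5 until 50, then 2 until 60.
timeWeightedAverage(
	[{ time: 0, value: 6 }, { time: 16, value: 3 }, { time: 27, value: 5 }, { time: 50, value: 2 }],
	0, 60
); // (6*16 + 3*11 + 5*23 + 2*10) / 60 = 4.4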